Thursday, September 30, 2010

Is Cataract Associated with Cardiovascular Morbidity?

From Eye

AY Nemet; S Vinker; S Levartovsky; I Kaiserman

Abstract

Aims To evaluate the prevalence of cardiovascular disease (CVD) and its risk factors among patients undergoing cataract surgery.

Methods A retrospective observational case–control study of all members older than 50 years who underwent cataract surgery in the Central District of Clalit Health Services in Israel (years 2000–2007) (n=12 984) and 25 968 age- and gender-matched controls. We calculated the prevalence of CVDs and their risk factors, including carotid artery disease (CAD), peripheral vascular disease (PVD), systemic arterial hypertension (HTN), chronic renal failure (CRF), ischaemic heart disease (IHD), congestive heart failure, diabetes, smoking, alcohol abuse, and hyperlipidaemia. The main outcome measure was the odds ratio of having CVDs among cataract patients undergoing surgery compared with controls.

Results No difference was found in demographics (age, gender, marital status, socioeconomic class, and place of residence) between the study and control groups. All CVD risk factors were significantly more prevalent in cataract patients in univariate analysis. Multivariate logistic regression analysis revealed a significant association of the following with cataractogenesis: diabetes, CAD, HTN, PVD, smoking, IHD, CRF, hyperlipidaemia, and Ashkenazi origin.

Conclusions CVDs and their risk factors are more prevalent among cataract patients undergoing cataract surgery.

Introduction

It is generally acknowledged that age-related (senile) cataract is a multifactorial disease. Epidemiologic studies of this disease have suggested many risk factors for cataract.[1] Diabetes, glaucoma, several analgesics, myopia, renal failure, smoking, heavy alcohol consumption, hypertension (HTN), low body mass index, use of cheaper cooking fuel, working in direct sunlight, family history of cataract, occupational exposure, and several biochemical factors are just a partial list of the potential risk factors for cataract.[2] Interactions among the risk factors can mask the real contribution of each to the development of the disease. An intriguing link between development of age-related cataracts and increased future risk of coronary heart disease has been suggested.[3]

As free radical-mediated oxidative damage to lipoproteins may accelerate atherosclerosis as well as cataract formation,[4] the development of cataract might be a marker for such damage and, therefore, might be associated with an increased risk of coronary heart disease.[3]

Although some studies support the association of cataract and cardiovascular diseases (CVDs),[5] other studies reporting cataract risk factors do not mention CVD. Klein et al [6] in the Beaver Dam Eye study reported that CVDs and their risk factors have little effect on the incidence of age-related cataract.

This study aims to evaluate the prevalence of cardiovascular morbidity among 12 984 cataract patients undergoing cataract surgery in Israel.

Anaphylaxis and Insect Allergy

From Current Opinion in Allergy and Clinical Immunology
Jeffrey G. Demain; Ashley A. Minaei; James M. Tracy

Abstract

Purpose of review Anaphylaxis is an acute-onset and potentially life-threatening allergic reaction that can be caused by numerous allergic triggers including stinging insects. This review focuses on recent advances, natural history, risk factors and therapeutic considerations.

Recent findings Recent work suggests that concerns over insect allergy diagnosis persist. This is especially true for individuals who have a convincing history of a serious life-threatening anaphylactic event but lack the necessary diagnostic criteria of venom-specific IgE by skin test or in-vitro diagnostic methods to confirm the diagnosis. Occult mastocytosis or increased basophil reactivity may play a role in this subset of patients. Additionally, epinephrine continues to be underutilized as the primary acute intervention for an anaphylactic reaction in the emergent setting.

Summary The incidence of anaphylaxis continues to rise across all demographic groups, especially those less than 20 years of age. Fortunately, the fatalities related to anaphylaxis appear to have decreased over the past decades. Our understanding of various triggers, associated risk factors, as well as an improved understanding and utilization of biological markers such as serum tryptase have improved. Our ability to treat insect anaphylaxis by venom immunotherapy is highly effective. Unfortunately, anaphylaxis continues to be underappreciated and undertreated especially in regard to insect sting anaphylaxis. This includes the appropriate use of injectable epinephrine as the primary acute management tool. These findings suggest that continued education of the general population, primary care healthcare providers and emergency departments is required.
Introduction

Anaphylaxis is an acute-onset, potentially fatal systemic allergic reaction.[1,2] Anaphylaxis can be triggered in numerous ways, but the three most common triggers are insect stings, foods, and medications.[3•,4–6] Manivannan et al. reviewed 208 patients and found that the inciting agents broke down into food (29.6%), medications (22.2%), insects (11.1%), others (7.4%), and unknown (29.6%). However, since large numbers of partially treated episodes often go undiagnosed or unrecognized, anaphylaxis is likely under-reported. No one knows the true rate of anaphylaxis, although overall global trends indicate increasing rates across all age groups and populations. The increase is most significant in people living in good socioeconomic conditions and people under the age of 20. The largest number of anaphylaxis cases typically occurs in children and adolescents; however, fatalities from insect stings are more common in middle-aged and older adults.

Stress and Neural Wreckage: Part of the Brain Plasticity Puzzle

Feb 5, 2008
By: Alvaro Fernandez

"My brain is fried, toast, frazzled, burnt out." How many times have you said or heard one version or another of these statements? Most of us think we are being figurative when we utter such phrases, but research shows that the biological consequences of sustained high levels of stress may mean we are more accurate than we would like to think.

Crash Course on Stress

Our bodies are a complex balancing act between systems working full time to keep us alive and well. This balancing act is constantly adapting to the myriad changes occurring every second within ourselves and our environments. When it gets dark our pupils dilate, when we get hot we sweat, when we smell food we salivate, and so forth. This constant balancing act maintains a range of stability in the body via change, and is often referred to as allostasis. Any change that threatens this balance can be referred to as allostatic load, or stress.

Allostatic load, or stress, is part of being alive. For example, just by getting up in the morning, we all experience a very important need to increase our heart rate and blood pressure in order to feed our newly elevated brain. Although usually manageable, this is a change the body needs to adapt to and, by our definition, a stressor.

Stress is only a problem when this allostatic load becomes overload. When change is excessive or our ability to adapt is compromised, things start to go wrong. We will focus here on what seems to be happening in the brain under such conditions.

Energy Mobilization

Whether it's getting up in the morning, worrying about the non-existent past or future, or getting angry at your last parking ticket, stress takes energy. One of the major roles of the infamous fight-or-flight response is to mobilize energy, and it does this well. If you need to run away from a swarm of killer bees or fend off an attacking bear, you will be assisted by various chemicals produced within the body. These include the well-known adrenaline (now more commonly referred to as epinephrine) and a lesser-known group of chemicals called the glucocorticoids, most notably cortisol. Both epinephrine and the glucocorticoids are involved in making stored energy available for use in the form of fats and sugars. Epinephrine does so over the short term (within seconds), while glucocorticoids act over a longer period (minutes to hours). Let's look at the effects of the latter of the two, the glucocorticoids.

Your Brain on Stress

Cortisol, the most prominent of the glucocorticoids, does an excellent job of allowing us to adapt to most stressors that last more than a couple of minutes but less than an hour. Short term, it will actually enhance our immune system, memory, and attention. Long term, past half an hour to an hour, excessively elevated cortisol levels start to have detrimental effects. It seems we were designed to deal with short spurts of high stress, such as beating back that attacking bear, rather than long, drawn-out stressors such as meeting deadlines.

Our brains appear to be most vulnerable to the effects of excessive stress in a region called the hippocampus. The hippocampus is a mass of neurons, each with multiple branch-like extensions (dendrites and axons) that make connections (synapses) with other neurons all across the brain. Among other things, this region is important in dealing with emotions and consolidating new memories. As with all brain regions, its ability to adapt relies upon being able to alter the branching and connections of its neurons. The hippocampus is also one of the only regions of the brain known to be able to produce new neurons, a process called neurogenesis.

Brain Damage

Enduring a high stressor for more than 30 minutes to an hour has been shown to negatively impact the hippocampus in various ways.
To begin, sustained exposure to higher-than-normal levels of cortisol results in the pruning back of the number of branches and synaptic connections of hippocampal neurons. By a variety of mechanisms, these conditions also increase the rate of cell death in this region of the brain.

As if this weren't bad enough, recent research also demonstrates that sustained increases in glucocorticoid levels impair the hippocampus's ability to create new neurons.

Over time, all of this results in shrinkage of the hippocampus, with associated declines in cognitive function, including the ability to retain new information and adapt to novel situations.

Damage Control

Fortunately, the negative effects of excessive stress can not only be stopped but also reversed once the source (psychological or physical) is removed or sufficiently reduced. Next time we will explore techniques one can use to protect our brains by managing the unavoidable stressors we all face as part of being human.

For more in-depth information featuring one of the leading expert researchers on the subject of stress, check out the following video (1 hour 20 minutes): Robert Sapolsky.

— Gregory Kellett has a master's in Cognitive Neurology/Research Psychology from SFSU and is a researcher at UCSF, where he currently investigates the psychophysiology of social stress. He wrote this article for SharpBrains.com to contribute to our public education initiative.

———————————————-

Physical Exercise and Brain Health

Jun 26, 2008
By: Dr. Pascale Michelon

Have you heard of or read John Ratey's book "Spark: The Revolutionary New Science of Exercise and the Brain"? According to Harvard psychiatry professor John Ratey, nothing beats exercise for promoting brain health.

I am sure you have also heard that exercising your mind promotes brain health.

What is the connection between physical and mental exercise? Do they have additive effects on brain health? Are they redundant?

Let's start by reviewing what we know about the effects of physical exercise on the brain.

The effect of physical exercise on cognitive performance

Early studies compared groups of people who exercised to groups of people who did not exercise much. Results showed that people who exercised usually had better performance in a range of cognitive tasks compared to non-exercisers.

Laurin and colleagues (2001) even suggested that moderate and high levels of physical activity were associated with lower risk for Alzheimer's disease and other dementias.

The problem with these studies is that the exercisers and the non-exercisers may differ on factors other than exercise. The advantage that exercisers show may not come from exercising but from other factors such as more resources, better brain health to start with, better diet, etc.

The solution to this problem is to randomly assign people to either an aerobic training group or a control group. If the exerciser group and the non-exerciser group are very similar to start with, and if the exerciser group shows less decline or better performance over time than the non-exerciser group, then one can conclude that physical exercise is beneficial for brain health.

In 2003, Colcombe and Kramer analyzed the results of 18 scientific studies published between 2000 and 2001 that were conducted in the way described above.

The results of this meta-analysis clearly showed that fitness training increases cognitive performance in healthy adults between the ages of 55 and 80.

Another meta-analysis, published in 2004 by Heyn and colleagues, showed similar beneficial effects of fitness training in people over 65 years of age with cognitive impairment or dementia.

What is the effect of fitness training on the brain itself?

Research with animals has shown that in mice, increased aerobic fitness (running) can increase the number of new cells formed in the hippocampus (the hippocampus is crucial for learning and memory). Increased exercise also has a beneficial effect on the vascular system of mice.

Only one study has used brain imaging to look at the effect of fitness on the human brain. In 2006, Colcombe and colleagues randomly assigned 59 older adults to either a cardiovascular exercise group or a nonaerobic exercise control group (stretching and toning exercises). Participants exercised 3 hours per week for 6 months. Colcombe et al. scanned the participants' brains before and after the training period.

After 6 months, the brain volume of the aerobic exercise group had increased in several areas compared to the other group. The volume increase occurred principally in frontal and temporal areas of the brain involved in executive control and memory processes. The authors do not know what underlying cellular changes might have caused these volume changes. However, they suspect, based on animal research, that the volume changes may be due to an increased number of blood vessels and an increased number of connections between neurons.

How does physical exercise compare to mental exercise?

Very few studies have tried to compare the effects of physical exercise and mental exercise on cognitive performance.
When looking at each domain of research, one notices the following differences:

- The effects of cognitive or mental exercise on performance seem to be very task-specific; that is, trained tasks benefit from training, but the benefits do not transfer very well to tasks in which one was not trained.

- The effects of physical exercise on performance seem broader. However, they do not generalize to all tasks. They benefit mostly tasks that involve executive-control components (that is, tasks that require planning, working memory, multitasking, and resistance to distraction).

To my knowledge, only one study has tried to directly compare cognitive and fitness training:

Fabre and colleagues, in 1999, randomly assigned subjects to 4 groups: an aerobic training group (walking or running for 2 hours per week for 2 months), a memory training group (one 90-minute session a week for 2 months), a combined aerobic and mental training group, or a control group (no training).

Results showed that, compared to the control group, memory performance increased in all 3 training groups. The combined group showed a greater increase than the other 2 training groups.

This suggests that the effects of cognitive and fitness training may be additive. However, this study involved only 8 participants per group! More research is clearly needed before anything can be safely concluded.

In the meantime, let's play it safe and combine fitness and cognitive training for better brain health.

Tuesday, September 28, 2010

Prescription and Nonprescription Medications Implicated in Overdose Suicides

From Medscape Medical News
Kate Johnson

September 27, 2010 (Toronto, Ontario) — Psychotropic and other prescription drugs, as well as over-the-counter medications, are frequently used in overdose suicides, according to preliminary findings from a study reported here at the Canadian Psychiatric Association 60th Annual Conference.

"The key point is that physicians prescribing certain classes of medications should be extravigilant if a patient is depressed or suicidal," said Mark Sinyor, MD, a fourth-year resident at the Department of Psychiatry, University of Toronto in Ontario, Canada.

A medical record review revealed 397 overdose suicides in Toronto during a 10-year period (1998-2007), which were divided evenly between men and women.

Medical examiner reports showed that painkillers, most commonly opioids, either alone or in combination with other drugs, were implicated in almost one-third (n = 112) of these suicides.

The second most common class of drugs was sedative hypnotics/benzodiazepines, which were implicated, either alone or in combination, in 105 cases.

Over-the-counter medications were the third most common source of overdose (n = 85), in particular diphenhydramine.

Tricyclic antidepressants came fourth on the list, either alone or in combination with other drugs (n = 81), with amitriptyline being the most common choice.

Newer Antidepressants Possibly the Most Lethal

"In 10 cases, the only drug considered to be present in a lethal amount was a newer antidepressant," said Dr. Sinyor, noting a recent study showing that of the selective serotonin reuptake inhibitors (SSRIs) and newer antidepressants, citalopram, venlafaxine, and mirtazapine are potentially the most lethal in overdose (Br J Psychiatry. 2010;196:354-358).

"In 7 of the 10 cases at least one of those drugs was present," he added.

Most suicides occurred in 40- to 59-year-olds, with 50% of all male and 45% of all female suicides falling into this age group.

This supports 2007 data from the US National Violent Death Reporting System, which found the highest rate of suicide in 45- to 54-year-olds.

The most common psychiatric diagnoses in the cohort included major depressive disorder, substance abuse disorders, bipolar disorder, and schizophrenia.

However, all diagnoses, including others such as anxiety disorders and personality disorders, were most likely underreported, said co–lead investigator Andrew Howlett, MD, also a fourth-year resident at the University of Toronto.

"There was very little evidence in the charts [about diagnoses], but compared to previous research, we think these other diagnoses were underreported," he added.

Most suicides occurred in people who were unmarried (79%) and who overdosed at home (71%).

Forty-three percent of the cohort had medical conditions (unspecified), 42% had made a previous suicide attempt, and only 38% had left a suicide note.

Important Reminder

Predictors for overdose with specific psychotropic medications were not obvious. However, individuals with a general medical condition were twice as likely to overdose with nonpsychotropic medications compared with healthy individuals.

For over-the-counter overdose, predictive factors were being unmarried (odds ratio, 3.1; P < .01) and no previous suicide attempts (odds ratio, 2.7; P < .01).

"What's kind of interesting is that if they had a prior suicide attempt they were less likely to leave a note [P < .01]," said Dr. Sinyor. He noted that with almost two-thirds (62%) not leaving a suicide note and a large percentage (42%) with a history of suicide attempts, many cases are suggestive of chronic impulsivity.

"It seems like a large percentage of these were impulsive acts. It wasn't part of our research, but reading through the charts there were tons of personal crises, often relationships breaking up, that tended to precede the overdose.

"I certainly think it's a reminder to those of us prescribing psychotropic medications," said session moderator Jennifer Brasch, MD, medical director of Psychiatric Emergency Services at St. Joseph's Healthcare and associate professor in the Department of Psychiatry and Behavioral Neurosciences at McMaster University in Hamilton, Ontario, Canada.

"There are so many different medications in involved in the use of overdose. Tricyclics are well known to be highly toxic, but there were a number of SSRIs implicated in deaths by overdose, and it's a reminder to healthcare providers to be cautious in prescribing and to limit the number of pills that a patient has available."

In addition to awareness about prescribing medications, physicians should also note that a significant number of suicides involved diphenhydramine, added Dr. Sinyor.

"Diphenhydramine is the active ingredient in Benadryl, but it's also the active ingredient in SleepEze and Nytol and the newer sleep supplements. Diphenhydramine is a readily available drug that is now frequently used for overdose — at least in Toronto."

Canadian Psychiatric Association (CPA) 60th Annual Conference: Abstract PS1d. Presented September 23, 2010.

What Are the Risks to Donors in Living Liver Transplantation?

From Medscape Gastroenterology > Ask the Experts

William F. Balistreri, MD

This question may have been prompted by the recent publicity[1] about the death of a man in Colorado after donating part of his liver to his brother. Subsequently, all live donor liver transplants were temporarily suspended at that hospital.
If the death is ultimately attributed to the procedure, it would be the fourth such death in the United States, according to the United Network for Organ Sharing (UNOS), and the second such death this year. This comes at a time when living liver donation has fallen to less than 50% of the number of live donations performed at the peak in 2001, currently accounting for approximately 3% of all adult liver transplants in the United States.[2,3]

Live donor liver transplantation, specifically, performance of a major hepatectomy on a healthy individual who has no medical indication other than offering an allograft liver for the recipient, has long been viewed with caution and skepticism.[4] Therefore, substantial efforts are expended to ensure the safety and long-term well being of the donor. The best outcomes for both donors and recipients have been maximized via technical refinements, innovative pre- and postoperative management strategies, and careful follow-up.

Why are living donors needed? The number of patients awaiting liver transplantation in the United States (approximately 15,000) greatly exceeds the supply of cadaver donor organs (approximately 4500/year), according to the UNOS registry. Thus the waiting time for liver transplantation and death on the waiting list have increased in recent years. This growing disparity between the demand for liver transplant and the supply of deceased donor organs was the stimulus for the development of living donor liver transplantation. Initial success with transplantation of live donor liver segments into children and the use of cadaver donor split-liver transplantation evolved into adult-to-adult living donor liver transplantation (A2ALL) using the right portion of the liver.[5] The multicenter A2ALL consortium has provided evidence that adult living donor liver transplantation is a viable option for liver replacement and approximately 300 A2ALL procedures have been performed in the United States.[6]

Living donor liver transplantation has several advantages over cadaver donor transplantation, including transplantation in a scheduled, elective, and timely fashion, permitting a reduction in waiting time and waiting list morbidity and mortality.[4] In addition, donors may derive a psychological benefit from the fact that they were able to help the recipients; donors express a sense of deep satisfaction from giving a loved one their "gift of life."[7] Nevertheless, every effort is made to ensure that donors who choose to proceed are free from coercion. The transplant team provides recipients, potential donors, and families with counseling and support through every phase of the process. A donor advocate, independent from the team, is available to ensure that the donor understands the issues involved with donation and to exclude donors whose personal life, work, or finances would be adversely affected by serving as a donor. Potential living donors then undergo an extensive evaluation to ensure that they are in optimal medical condition to proceed with organ donation. The medical evaluation itself also carries risks.[8,9]

Following living donor liver transplantation, the donor's liver regenerates to full size within a few weeks, without long-term impairment of liver function. Most donors are hospitalized briefly (about 1 week after the donation) and recover completely. However, as illustrated by the Colorado case, this procedure is not without risks. The estimated risk of mortality is 0.5% to 1%. Overall donor morbidity is high, estimated at roughly 35%.[10] This is usually related to the surgical incision and the possibility of blood clots; other reported problems include bleeding, infection, bile leaks, damage to the biliary tree, and risks from anesthesia. Donors have reported chronic problems, including biliary strictures, reoperations, and chronic pain.

The most common postoperative complications among donors for living donor liver transplant involve the biliary tract; the incidence of biliary complications in donors is approximately 5%.[11] Most of the biliary complications can be treated by nonsurgical methods or interventional procedures. Guba and colleagues performed a single-center, retrospective review of all patients brought to the operating room for donor hepatectomy.[12] Of 257 right lobe donors, the donor operation was aborted in 5% primarily because of aberrant biliary ductal or vascular anatomy or unsuitable liver quality.

In rare cases, if the remaining liver is damaged, the donor may also need a liver transplant. Organ Procurement and Transplant Network data reveal that 5 of 3632 (0.1%) living liver donors were subsequently listed for liver transplant.[13] One living donor died after being placed on the waiting list, 3 candidates received deceased donor liver transplants within 4 days after listing, and one candidate was removed from the waiting list following improved health.

In several recent case series, the rate of complications in living liver donors has been tracked. The Clavien 5-tier grading system was applied retrospectively by Marsh and colleagues to determine the incidence of potentially (grade 3), actually (grade 4), or ultimately fatal (grade 5) complications during the first post-transplant year.[14] All 121 donors survived; however, 13 donors (11%) had grade 3 (n = 9) or grade 4 (n = 4) complications. Adcock and associates analyzed the outcomes of the first 202 consecutive donor operations performed at their center.[15] Donor survival was 100%; however, 40% of the donors experienced a medical complication during the first year after surgery (21 grade 1, 27 grade 2, 32 grade 3). All donors returned to predonation employment or studies, although 4 donors (2%) experienced a psychiatric complication. Fernandes and colleagues analyzed the outcomes of live liver donors at a single Brazilian center.[16] None of the 74 donors experienced life-threatening complications or died; however, 28 complications were observed in 26 patients: grade 1 (n = 11; 40%), grade 2 (n = 8; 29%), and grade 3 (n = 9; 32%). No patient presented with a grade 4 or 5 complication. In a study of 1262 living donors by Iida and associates, the overall complication rate and severity were significantly higher in right and extended right lobe graft donors than in other donors (44% vs 19%, P < .05).[17] Donor age and prolonged operation time were also found in multivariate analyses to be independent risk factors for complications.

In summary, the risk of dying from living liver donor surgery may be as high as 0.5% when donating the right lobe. If all complications are included, 1 of every 3 donors will experience a complication; most are considered minor or have no permanent sequelae.[18] Thus, centers must carefully weigh the risks against the benefits. Likewise, potential living liver donors should balance the decision to donate with the advice of family, friends, an independent donor advocate, and the medical/surgical team. As stated by Marsh and colleagues, "no matter how carefully right lobar living donor transplantation is applied, the historical verdict on the ethics of this procedure may be harsh."[14]

Cadaver donor organ availability appears to have reached a plateau despite initiatives directed at increasing organ donation.
If enough cadaver organs could be obtained to meet the needs for liver transplantation, living donor liver transplants would not be necessary. Thus, we all need to expand our efforts to increase donor awareness and promote registration.

Glucosamine for Back Pain?

From Medscape Orthopaedics & Sports Medicine > Viewpoints
Joseph K. Lee, MD

Effect of Glucosamine on Pain-Related Disability in Patients With Chronic Low Back Pain and Degenerative Lumbar Osteoarthritis: A Randomized Controlled Trial
Wilkens P, Scheel IB, Grundnes O, Hellum C, Storheim K
JAMA. 2010;304:45-52

Article Summary
Glucosamine has been routinely recommended to help treat peripheral joint osteoarthritis.[1]
Its use has been advocated in chronic low back pain (LBP) conditions, too. However, its effect in patients with LBP has not been well studied. In this double-blinded, randomized, controlled study by Wilkens and colleagues, 250 patients older than 25 years of age with a history of chronic LBP (> 6 months) and degenerative lumbar osteoarthritis were given either 1500 mg of oral glucosamine or placebo for 6 months. Patients were evaluated at the end of the 6 months, and then at 12 months. No statistically significant difference was seen with any of the outcome measures (Roland Morris Disability Questionnaire, LBP at rest, LBP during activity, and EuroQol-5 Dimensions) at 6 or 12 months.

Viewpoint
Studies of glucosamine in patients with osteoarthritis have shown modest-to-no significant benefits.[1-4] According to the findings in this particular study, glucosamine does not appear to show any significant benefit in patients with chronic LBP.
Study limitations included patient screening criteria, which did not exclude those with concomitant leg symptoms, allowance of other adjunct treatments, and variability in patient adherence to the glucosamine treatment.

Because of the complex nature of diagnosing and treating chronic low back pain, it can be quite difficult to isolate the pain generator.
At times, there may be more than one pain generator causing a patient's symptoms. Future studies focused on LBP and osteoarthritis may benefit from identifying patients based on specific clinical signs and symptoms of osteoarthritis, rather than on x-ray evidence alone.

Sunday, September 26, 2010

Clinical Vitamin D Deficiency Linked to Depression in Older Adults

From Medscape Medical News

Megan Brooks

September 23, 2010 — In a British national survey of older adults, clinical vitamin D deficiency, defined as a serum 25-hydroxyvitamin D (25[OH]D) level less than 10 ng/mL, was significantly associated with depressive symptoms, independent of age, sex, social class, physical health status, and season.

Milder states of vitamin D deficiency were not strongly associated with depression in older adults, Robert Stewart, MD, of King's College London, and Vasant Hirani, MSc, of University College London, United Kingdom, report in the September issue of Psychosomatic Medicine.

"Although vitamin D deficiency has been investigated in relationship to mental disorders in younger adults, relatively little research has investigated this association in older people, despite the higher potential impact," the study authors write.

They analyzed data on 2070 adults 65 years and older who participated in the 2005 Health Survey for England. As part of the survey, information on health and health behaviors and sociodemographic data were collected, 25(OH)D levels were measured, and depressive symptoms scored using the Geriatric Depression Scale.

The directly measured vitamin D levels, the use of a widely used scale for depression in older adults, and the large nationally representative sample are key advantages of the study, the study authors say.

Independent Risk Factor

Overall, about one-quarter of the cohort (25.2%) had depressive symptoms, they report. The prevalence of depressive symptoms was 22.6% in the 85.4% of adults with 25(OH)D levels less than 30 ng/mL and 25.8% in the 51.4% of adults with 25(OH)D levels less than 20 ng/mL.

The prevalence of depression was highest (35.0%) in the 9.8% of the cohort with 25(OH)D levels less than 10 ng/mL (clinical deficiency).

The prevalence ratio for depressive symptoms in those with clinical vitamin D deficiency, relative to the remainder of the sample, was 1.45, the team notes, and the population-attributable fraction calculated from this was 4.2%.
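As a back-of-the-envelope check (not part of the published analysis), the 4.2% figure can be reproduced from the reported exposure prevalence and prevalence ratio using Levin's formula for the population-attributable fraction:

```python
# Back-of-the-envelope check using Levin's formula for the
# population-attributable fraction (PAF); not the authors' own calculation.
p_exposed = 0.098        # 9.8% of the cohort had 25(OH)D < 10 ng/mL (clinical deficiency)
prevalence_ratio = 1.45  # reported prevalence ratio for depressive symptoms

paf = p_exposed * (prevalence_ratio - 1) / (1 + p_exposed * (prevalence_ratio - 1))
print(f"Population-attributable fraction: {paf:.1%}")  # ~4.2%, matching the reported value
```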

In logistic regression analyses, associations between the 3 deficiency states and depression were significant before adjustment for covariates. After adjustment for age, sex, social class, season, smoking status, body mass index, long-standing limiting illness, and subjective general health status, only the association with clinical vitamin D deficiency remained significant and independent (odds ratio, 1.46; 95% confidence interval, 1.02 – 2.08; P = .04).
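For readers less familiar with this kind of covariate adjustment, here is a minimal, hypothetical sketch of an adjusted logistic regression in Python (statsmodels). The file name and column names are invented placeholders, not the survey's actual variables or the authors' code; it only illustrates how an adjusted odds ratio such as 1.46 is read off a fitted model.

```python
# Hypothetical sketch of a covariate-adjusted logistic regression.
# The CSV file and column names are invented placeholders, not the 2005
# Health Survey for England variables or the authors' analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_data.csv")  # hypothetical dataset

model = smf.logit(
    "depressive_symptoms ~ clinical_vitd_deficiency + age + sex + social_class"
    " + season + smoking + bmi + limiting_illness + general_health",
    data=df,
).fit()

# Exponentiated coefficients are adjusted odds ratios; the published model
# reports an OR of about 1.46 for clinical vitamin D deficiency.
print(np.exp(model.params))
print(np.exp(model.conf_int()))  # 95% confidence intervals on the odds ratio scale
```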

Further adjustment for alcohol intake and stratification by season of examination did not markedly or consistently alter the results, the researchers note.

Clinical Trial Warranted

Commenting on the study for Medscape Medical News, geriatrician and epidemiologist Luigi Ferrucci, MD, PhD, of the Clinical Research Branch of the National Institute on Aging in Baltimore, Maryland, who was not involved in the study, said there has been "a long series of papers" showing vitamin D deficiency is not only associated with bone problems but with a series of other problems, including low muscle strength, cognitive and vision problems, and depression.

"The studies on depression have been cross-sectional and the data were not very convincing for a number of reasons. The most important one, probably, is that if you have low vitamin D you are likely to have all these pathological problems associated with low vitamin D and these, by themselves, can cause you to be depressed."

Echoing this, Dr. Stewart and Mr. Hirani point out in their paper that "it is possible that depressive states were a cause, rather than a consequence, of vitamin D deficiency." However, if this were the case, they say the association with depression would be expected to be to a similar extent with any relative 25(OH)D deficiency rather than restricted to the 10% lowest levels, as was the case in the current study.

"Ultimately, prospective research is required to clarify the direction of cause and effect," the study authors note. Seconding that view, Dr, Ferrucci said, "An observational study is far from truly establishing causality, but there may be enough evidence at this point to start a clinical trial."

He believes we are reaching a "critical threshold of evidence where it may be worthwhile investing in a clinical trial to study the effect of vitamin D supplementation on depression risk."

In such a study, Dr. Ferrucci explained, "we'd take people with no depression at baseline, but maybe at risk for depression, treat them with vitamin D or placebo, and what we'd expect to see is that those treated with vitamin D will be less likely to develop a depressive episode in the future; if we could demonstrate that, then the causal pathway is more likely, but we are not at that stage yet."

If vitamin D deficiency is demonstrated to be a cause of depression, correcting the problem "could be an effective public health measure to reduce depression prevalence in later life," Dr. Stewart and Mr. Hirani conclude in their report.

The study authors and Dr. Ferrucci have disclosed no relevant financial relationships.

Psychosom Med. 2010;72:608-612.

Wednesday, September 22, 2010

Pulsed Radiofrequency Helpful for Herniated Disk and Spinal Stenosis

From Medscape Medical News
News Author: Laurie Barclay, MD
CME Author: Désirée Lie, MD, MSEd

May 24, 2007 -- Pulsed radiofrequency is an effective treatment in patients with herniated disk and spinal stenosis but not in those with failed back surgery syndrome (FBSS), according to the results of a retrospective analysis reported in the seventh volume of Pain Practice.

"Radiofrequency (RF) thermolesioning adjacent to the dorsal root ganglion (DRG) has been employed for pain relief in patients with cervicobrachial pain, thoracic radiculopathy, and chronic lumbar radicular pain (LRP)," write David Abejón, MD, FIPP, from Hospital Universitario Clínica Puerta de Hierro in Madrid, Spain, and colleagues.
"Despite its widespread use and well-documented efficacy, this option does not appear to be an ideal modality of treatment for LRP because neurodestructive methods for the treatment of neuropathic pain are in principle generally considered inappropriate.... The purpose of this study was to evaluate the effectiveness of pulsed radiofrequency (PRF) applied to the lumbar dorsal root ganglion."

The investigators performed a retrospective analysis of 54 consecutive patients who underwent 75 pulsed radiofrequency procedures and divided them into 3 groups based on the etiology of the lesion: herniated disk, spinal stenosis, and FBSS.
The analgesic efficacy of the technique was evaluated using a 10-point Numeric Rating Scale (NRS) at baseline and the NRS and Global Perceived Effect (GPE) at 30, 60, 90, and 180 days.
Other outcomes were reduction in medications and number of complications observed.

The NRS and GPE scores decreased in patients with herniated disk (P < .05) and spinal stenosis (P < .001) but not in those with FBSS. No complications were observed.

Therapeutic success was defined as a GPE score greater than 5 at 60-day follow-up or a decrease in NRS score of 2 points.
Based on these criteria, the proportion of patients with therapeutic success was 15.3% in FBSS, 66.6% in spinal stenosis, and 72.4% in herniated disk, and the number needed to treat was 6.5, 1.49, and 1.38 patients, respectively.
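The numbers needed to treat quoted above appear to be simply the reciprocals of the success proportions, since this retrospective series has no untreated comparison group; the short sketch below, written under that assumption, reproduces the reported values.

```python
# Reproducing the reported NNT values under the assumption that
# NNT = 1 / (proportion of patients meeting the therapeutic-success criteria).
success_rates = {
    "failed back surgery syndrome": 0.153,
    "spinal stenosis": 0.666,
    "herniated disk": 0.724,
}

for diagnosis, rate in success_rates.items():
    print(f"{diagnosis}: NNT = {1 / rate:.2f}")
# failed back surgery syndrome: NNT = 6.54   (reported: 6.5)
# spinal stenosis: NNT = 1.50                (reported: 1.49)
# herniated disk: NNT = 1.38                 (reported: 1.38)
```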

Study limitations include retrospective design and small sample size.

"We conclude that, in our hands and with our patient population, PRF of DRG yields satisfactory results in patients with HD [herniated disk], and lesser but worthwhile results in patients with SS [spinal stenosis]," the authors write. "PRF of DRG appears to be of no benefit to patients with FBSS."

Pain Pract. 2007;7:21-26.

Lumbar Spinal Stenosis: Syndrome, Diagnostics and Treatment

From Nature Reviews Neurology

Eberhard Siebert, MD; Harald Prüss, MD; Randolf Klingebiel, MD; Vieri Failli, PhD; Karl M. Einhäupl, MD; Jan M. Schwab, MD, PhD

Abstract

Lumbar spinal stenosis (LSS) comprises narrowing of the spinal canal with subsequent neural compression, and is frequently associated with symptoms of neurogenic claudication.
To establish a diagnosis of LSS, clinical history, physical examination results and radiological changes all need to be considered.
Patients who exhibit mild to moderate symptoms of LSS should undergo multimodal conservative treatment, such as patient education, pain medication, delordosing physiotherapy and epidural injections.
In patients with severe symptoms, surgery is indicated if conservative treatment proves ineffective after 3-6 months.
Clinically relevant motor deficits or symptoms of cauda equina syndrome remain absolute indications for surgery.

The first randomized, prospective studies have provided class I-II evidence that supports a more rapid and profound decline of LSS symptoms after decompressive surgery than with conservative therapy.

In the absence of a valid paraclinical diagnostic marker, however, more evidence-based data are needed to identify those patients for whom the benefit of surgery would outweigh the risk of developing complications. In this Review, we briefly survey the underlying pathophysiology and clinical appearance of LSS, and explore the available diagnostic and therapeutic options, with particular emphasis on neuroradiological findings and outcome predictors.

Introduction
The term lumbar spinal stenosis (LSS) refers to the anatomical narrowing of the spinal canal and is associated with a plethora of clinical symptoms.

The annual incidence of LSS is reported to be five cases per 100,000 individuals, which is fourfold higher than the incidence of cervical spinal stenosis.
The characteristic symptom of LSS is neurogenic claudication, which was a term coined by Dejerine (1911) and defined by von Gelderen (1948) and, later, Verbiest (1954).
In his report, von Gelderen described neurogenic claudication as "localized, bony discoligamentous narrowing of the spinal canal that is associated with a complex of clinical signs and symptoms comprising back pain and stress-related symptoms in the legs (claudication)".

This characterization is still in use today. LSS has become the most common indication for lumbar spine surgery, in part because of the increasing quality and availability of radiological imaging.
The increasing frequency of LSS surgery also reflects the elevated demand for mobility and flexibility in the aging population. Propagated by the increasing prevalence of this condition, controlled, evidence-based advice for individual treatment decisions is starting to emerge.

LSS can be classified according to etiology (primary and secondary stenoses) and to anatomy (central, lateral or foraminal stenosis), as summarized in Box 1.
Primary stenosis is caused by congenital narrowing of the spinal canal, whereas secondary stenosis can result from a wide range of conditions, most often chronic degeneration, which leads to a destabilized vertebral body. Other causes of secondary stenosis include rheumatoid diseases, osteomyelitis, trauma, tumors, and, in rare cases, Cushing disease or iatrogenic cortisone application.


http://cme.medscape.com/viewarticle/704859

Friday, September 17, 2010

NSAID Use Associated With Atrial Fibrillation

From Heartwire

Lisa Nainggolan

September 15, 2010 (Chieti, Italy) — The use of nonsteroidal anti-inflammatory drugs (NSAIDs) is associated with an increased risk of chronic atrial fibrillation (AF), a new study suggests [1]. However, the researchers do not believe the drugs are causing AF; rather, they suggest that the underlying inflammation necessitating the NSAID therapy might be the culprit.

Dr Raffaele De Caterina (G d'Annunzio University, Chieti, Italy) and colleagues found a statistically significant 44% increase in the risk of chronic AF, but no association with paroxysmal AF, in users of NSAIDs. They also confirmed previous findings regarding the association of steroids with AF, with those taking steroids two and a half times more likely to develop chronic AF than those not taking them, they report in their paper in the September 13, 2010 issue of the Archives of Internal Medicine.

"A likely explanation for our findings is the existence of an underlying inflammatory condition, increasing the risk of AF on the one hand and prompting the use of NSAIDs on the other," they say. Future research should ideally include a description of left ventricular function, atrial size and/or function, and inflammatory markers, which would help make the association "more biologically plausible," they add.

A Hypothesis That Deserves Further Investigation

De Caterina et al identified patients aged 40 to 89 years with a first-ever diagnosis of AF in 1996 in the UK General Practice Research Database and classified them as having paroxysmal or chronic AF. After validation with their primary-care physicians, 1035 patients were confirmed as having incident chronic AF and 525 as having paroxysmal AF. Two separate nested case-control analyses estimated the risk of first-time chronic and paroxysmal AF among users of steroids and NSAIDS.

Increased risk of chronic AF with NSAID use was present irrespective of treatment duration, although the researchers did find an even greater risk associated with long-term use (RR 1.80 in those who used NSAIDs for more than one year). But there was no apparent dose-response relationship when they divided daily use into low, medium, and high.

The findings cannot be explained by the occurrence of heart failure, either, say De Caterina and colleagues, because further stratification found the increased risk of AF associated with NSAIDs was absent in those with prior HF (but present in those without HF).

They go on to explain that the most frequent pathoanatomical changes in AF are atrial fibrosis and loss of atrial muscle mass, and although it is difficult to distinguish between changes due to AF and those due to associated heart disease, fibrosis may precede the onset of AF and may be caused by inflammation.

"It is possible, and we would like to propose, that conditions presenting systemic inflammation, such as autoimmune and rheumatic disorders, represent an independent risk factor for atrial fibrosis and subsequently for an increased risk of onset or persistence of AF. We believe this hypothesis deserves further investigation," they conclude.

Data collection for the determination of AF cases was performed with an unrestricted research grant to Centro Español de Investigación Farmacoepidemiológica from AstraZeneca. The authors report no disclosures.

References
1. De Caterina R, Ruigómez A, Rodriguez LAG. Long-term use of anti-inflammatory drugs and risk of atrial fibrillation. Arch Intern Med. 2010;170:1450-1455.


Glucosamine and/or Chondroitin May Not Be Helpful for Osteoarthritis

From Medscape Medical News
Laurie Barclay, MD

September 16, 2010 — Glucosamine and/or chondroitin may not be helpful for patients with osteoarthritis of the hip or knee, according to the results of a network meta-analysis reported in the September 17 issue of the BMJ.

"Osteoarthritis of the hip or knee is a chronic condition mostly treated with analgesics and non-steroidal anti-inflammatory drugs, but these drugs can cause serious gastrointestinal and cardiovascular adverse events, especially with long term use," write Simon Wandel, from the University of Bern in Bern, Switzerland, and colleagues. "Disease modifying agents that not only reduce joint pain but also slow the progression of the condition would be desirable. Throughout the world for the past 10 years, the cartilage constituents chondroitin and glucosamine have been increasingly recommended in guidelines, prescribed by general practitioners and rheumatologists, and used by patients as over the counter medications to modify the clinical and radiological course of the condition."

The goal of the study was to assess the impact of supplementation with glucosamine and/or chondroitin on joint pain and on radiologic progression in patients with osteoarthritis of the hip or knee. Using a Bayesian model allowing synthesis of multiple time points, the investigators combined direct comparisons within trials with indirect evidence from other trials.
The primary study endpoint was pain intensity, and change in minimal width of the joint space was the secondary endpoint. When a 10-cm visual analog scale was used, the prespecified, minimal clinically important difference between preparations and placebo was −0.9 cm.

The investigators searched electronic databases and conference proceedings from their beginnings to June 2009, and they also contacted appropriate experts and searched relevant Web sites. Inclusion criteria were large-scale, randomized controlled trials enrolling more than 200 patients with knee or hip osteoarthritis and comparing glucosamine, chondroitin, or their combination vs placebo or head to head.

Ten trials meeting eligibility criteria were identified, enrolling a total of 3803 patients. On the 10-cm visual analog scale, the overall difference in pain intensity vs placebo was −0.4 cm (95% credible interval, −0.7 to −0.1 cm) for glucosamine, −0.3 cm (95% credible interval, −0.7 to 0.0 cm) for chondroitin, and −0.5 cm (95% credible interval, −0.9 to 0.0 cm) for the combination. None of the 95% credible intervals crossed the prespecified boundary for a minimal clinically important difference. Compared with commercially funded trials, industry-independent trials showed smaller effects (P = .02 for interaction).
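To make the comparison with the prespecified threshold concrete, the sketch below simply checks the reported point estimates and credible intervals against the −0.9 cm boundary; the numbers are taken from the paragraph above, and the check is only an illustration of the reasoning, not the authors' analysis code.

```python
# Checking the reported estimates against the prespecified minimal clinically
# important difference (MCID) of -0.9 cm on the 10-cm visual analog scale.
# Point estimates and 95% credible intervals are copied from the text above.
mcid = -0.9

estimates = {
    "glucosamine": (-0.4, -0.7, -0.1),
    "chondroitin": (-0.3, -0.7, 0.0),
    "combination": (-0.5, -0.9, 0.0),
}

for preparation, (point, lower, upper) in estimates.items():
    crosses_mcid = lower < mcid  # does the interval extend beyond -0.9 cm?
    print(f"{preparation}: {point} cm (95% CrI {lower} to {upper}); "
          f"interval extends beyond MCID: {crosses_mcid}")
# None of the intervals extends beyond -0.9 cm, so none of the effects reaches
# the prespecified threshold for clinical importance.
```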

For changes in minimal width of joint space, the differences were all very small, with 95% credible intervals overlapping zero.

"Compared with placebo, glucosamine, chondroitin, and their combination do not reduce joint pain or have an impact on narrowing of joint space," the study authors write.
"Health authorities and health insurers should not cover the costs of these preparations, and new prescriptions to patients who have not received treatment should be discouraged."

Study Limitations

Limitations of this study include use of different instruments to measure joint pain, which made it necessary to calculate effect sizes as a common measure of effectiveness so that outcomes assessed with different instruments would be comparable. Differences in responsiveness of different instruments could potentially threaten the validity of results. In addition, many patients included in the trials may have had radiologic disease too severe to benefit from supplementation or pain too minimal to benefit from analgesic effects.

Conclusion

"Our findings indicate that glucosamine, chondroitin, and their combination do not result in a relevant reduction of joint pain nor affect joint space narrowing compared with placebo," the study authors note. "Some patients, however, are convinced that these preparations are beneficial, which might be because of the natural course of osteoarthritis, regression to the mean, or the placebo effect."

"We are confident that neither of the preparations is dangerous," the study authors conclude. "Therefore, we see no harm in having patients continue these preparations as long as they perceive a benefit and cover the costs of treatment themselves."

The Swiss National Science Foundation's National Research Program 53 on musculoskeletal health supported this study. Some of the study authors were supported by the Swiss National Science Foundation and/or the Janggen-Poehn-Foundation. The other study authors have disclosed no relevant financial relationships.

BMJ. 2010;341:c4675.

One-Time PSA Test at 60 Instead of Routine Screening?

From Medscape Medical News
Zosia Chustecka

September 16, 2010 — New data suggest that a 1-time prostate-specific antigen (PSA) test at age 60 can pinpoint men who are likely to die from prostate cancer.

The results, published online September 14 in BMJ, come from a Swedish study with a 25-year follow-up.

The finding "needs to be validated in additional studies," according to an accompanying editorial. The researchers agree there is a need for replication by an independent team; nevertheless, they are enthusiastic about their results.

"This is a key finding," said lead research Andrew Vickers, PhD, from the Department of Epidemiology and Biostatistics at the Memorial Sloan-Kettering Cancer Center in New York City.

"We know that screening detects many prostate cancers that are not harmful, leading to anxiety and unnecessary treatment," he said in a statement. Indeed, a separate study published online September 14 in BMJ found no support for routine PSA screening in all men.

The approach the study authors propose — of testing once at age 60 — pinpoints men who are at increased risk for "really aggressive cancers, the sort likely to lead to symptoms or shorten a man's life," Dr. Vickers said. His team found that most of the deaths from prostate cancer were among the 25% or so of men who had, at age 60, PSA levels higher than 2 ng/mL.

The team originally started their study with the hope of finding a new biomarker for prostate cancer. "What we found instead was a new way of using an old test," Dr. Vickers said.

New Way of Thinking About the PSA Test

In an interview with Medscape Medical News, Dr. Vickers suggested that the finding provided "a new way of thinking about the PSA test that offers clear recommendations for clinical practice."

"We were surprised by just how strong the associations were," he said.

Instead of routine PSA screening for all men, which has led to overdiagnosis and overtreatment, this study suggests that repeat screening can be confined to the 25% or so of men whose PSA level is above 2 ng/mL at age 60.

It also suggests that the 50% or so of men with PSA levels below 1 ng/mL at age 60 can be left alone, and need not have any further PSA screening. "The harms of further screening will probably outweigh the benefits in this group," he said.

This conclusion about discontinuing screening in men with low baseline PSA levels echoes the conclusion of another study published this week.

"We haven't totally solved the problem of overdiagnosis," Dr. Vickers explained. Many patients who have a higher than average PSA at age 60 will develop prostate cancers that are unlikely to lead to death. "Nonetheless, it is clear that risk-stratifying screening will reduce overdiagnosis in men at low risk of prostate cancer death and will improve compliance with screening in those men with most to benefit," he added.

"It's certainly thought-provoking," was the reaction of Brantley Thrasher, MD, FACS, professor of urology at the University of Kansas in Kansas City, who acts as a spokesperson for the American Urological Association.

However, he cautioned against relying too much on a single measurement of PSA, because it represents just a "snapshot in time."

PSA is a "continuous variable," and it is important to have a number of data points, he said. "Another concern I have is that there is no PSA below which you can tell a man that he doesn't have cancer," he added.

Dr. Thrasher said he could not agree with the proposal that a man could be told not to worry about prostate cancer ever again on the basis of just 1 test, but he could foresee extending the time period between checks — for example, from having a PSA test yearly to having it every 5 years.

This view was echoed by Andrew Wolf, MD, associate professor of medicine at the University of Virginia School of Medicine in Charlottesville, who was also approached for independent comment by Medscape Medical News. "I don't think you can make a dichotomous decision to continue to screen or not on the basis of 1 test," he said. "In particular, I don't think you can leap to the conclusion that you are good for life if your level is below 1 ng/mL."

"It would be premature to change our practice on the basis of these findings," he added. But the study is "intriguing" and it does add to the literature. It also adds fuel to the ongoing discussions about extending intervals between PSA tests, he said. Initially, in the United States, this was seen as an annual test, but there is a move toward longer intervals now, especially in low-risk men. The American Cancer Society recently recommended testing every 2 years for men with a PSA value below 2.5 ng/mL, he noted.

One-Time Test Predicts Mortality

The study involved reanalyzing blood samples that had been collected more than 25 years previously for the Malmö Preventive Project. Originally, these blood samples were collected from 60-year-old Swedish men for cardiovascular studies. But Dr. Vickers and colleagues, including senior author Hans Lilja, MD, PhD, also from Sloan-Kettering, analyzed the stored blood samples for PSA.

They collected this information for 1167 men.

Then they scoured the Swedish Cancer Registry for details of men who had been diagnosed with prostate cancer (n = 126), and identified 43 men who developed prostate cancer metastases and 35 who died from the disease.

Conditional logistic regression analysis showed that it was the men with the highest levels of PSA in their blood at age 60 who were most likely to die from prostate cancer.

"As an example, men with a [PSA] concentration ≥ 2 ng/mL at age 60, have, on average, 26 times the odds of dying from prostate cancer than men with a concentration <2 ng/mL," the researchers write.

It was rare to find prostate cancer metastases or death from prostate cancer among men who had a PSA concentration below 1 ng/mL at age 60, the researchers note, but the risk rose rapidly as the concentrations increased.

Risk Stratification

Men aged 60 with a PSA concentration below 1 ng/mL (about half of the men in this study) should be considered at low risk for prostate cancer death and might not need to be screened in the future, the researchers suggest. They might go on to develop prostate cancer, but "even if they do harbor cancer, it is unlikely to become apparent during their lifetime and even less likely to become life-threatening," they add.

In contrast, men with a PSA concentration above 2 ng/mL (about 25% of men in this study) should be considered at increased risk for aggressive prostate cancer and should continue to be screened regularly, they conclude.
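As a rough illustration of this risk-stratification scheme, the following Python sketch applies the two thresholds discussed above (1 ng/mL and 2 ng/mL at age 60) to a PSA value. The function name and the handling of the intermediate group are assumptions made for illustration; the study itself does not prescribe a clinical algorithm.

def psa_screening_advice(psa_at_60: float) -> str:
    # Illustrative triage based on the thresholds discussed in the article.
    if psa_at_60 < 1.0:
        # ~50% of men in the study; prostate cancer death was rare in this group
        return "low risk: further screening may do more harm than good"
    elif psa_at_60 <= 2.0:
        # intermediate group; the article does not suggest a specific interval
        return "intermediate risk: continue screening at the usual interval"
    else:
        # ~25% of men in the study; most prostate cancer deaths occurred here
        return "increased risk: continue regular screening"

for psa in (0.6, 1.5, 3.2):
    print(f"PSA {psa} ng/mL at age 60 -> {psa_screening_advice(psa)}")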

But a raised PSA level is "far from being an inevitable harbinger of advanced prostate cancer," they point out. Even among men with the highest PSA levels (5.2 ng/mL), only 1 in 6 will die of prostate cancer by age 85.

Limiting screening to a 1-time PSA test is "likely to shift the ratio of harms to benefits," the researchers note. They argue that it would also "lead to increased acceptance of screening among patients."

In addition, it could increase the uptake of chemoprevention with drugs like finasteride, they suggest. Currently, few men take up this option, but they might be more willing to do so if they were identified as being at high risk.

The researchers wonder whether these results can be replicated by an independent group, and whether the risk stratification would be similar in other populations. This study involved white Swedish men, but the incidence of prostate cancer is lower in Asian and higher in African American people than in white people.

This point was also raised by Dr. Wolf, who pointed out that the study was conducted in one town in Sweden, where the men are likely to be genetically similar, and that they were all 60 years old. Hence, these findings cannot be extrapolated to other populations or other age groups, he cautioned.

Another prostate cancer researcher, Lars Holmberg, MD, PhD, from the Division of Cancer Studies at King's College Medical School in London, United Kingdom, said the study is "well done by a very competent group and on good quality data."

"Everything that can be done to help use PSA testing in a more rational way, minimizing the side effects of testing on wide indications, is worthwhile," Dr. Holmberg told Medscape Medical News.

"The strategy they propose may diminish testing and anxiety and unnecessary diagnoses," he said. However, "it is unclear how much their proposed limitation of PSA use would really affect the major (and very serious!) problem with screening — overdiagnosis."

The study was funded by grants from the National Cancer Institute, the Swedish Cancer Society, the Swedish Research Council, and several other foundations. Dr. Lilja reports holding a patent for free PSA and hK2 assays.

BMJ. Published online September 14, 2010.

Zosia Chustecka has disclosed no relevant financial relationships.

Carotid Stenting Should Be Avoided in Patients Older Than 70 Years

From Medscape Medical News

Fran Lowry

September 17, 2010 — Stenting for symptomatic carotid stenosis in patients 70 years and older should be avoided, but the procedure might be as safe as endarterectomy in younger patients, according to results of a prospectively defined meta-analysis published Online First September 10 in The Lancet.

"Results from randomised controlled trials have shown a higher short-term risk of stroke associated with carotid stenting than with carotid endarterectomy for the treatment of symptomatic carotid stenosis," write Martin M. Brown, MD, National Hospital for Neurology and Neurosurgery and University College London, London, United Kingdom, and colleagues from the Carotid Stenting Trialists' Collaboration. "However, these trials were underpowered for investigation of whether carotid artery stenting might be a safe alternative to endarterectomy in specific patient subgroups."

The investigators conducted a preplanned meta-analysis of individual patient data from 3 randomized controlled trials: the Endarterectomy versus Angioplasty in Patients with Symptomatic Severe Carotid Stenosis (EVA-3S) trial, the Stent-Protected Angioplasty versus Carotid Endarterectomy (SPACE) trial, and the International Carotid Stenting Study (ICSS).

The data from the trials, comprising 3433 patients, were pooled and analyzed, with any stroke or death as the primary outcome.

In the first 120 days after randomization, any stroke or death was 53% more likely to occur in the carotid stenting group than in the carotid endarterectomy group, occurring in 8.9% (153/1725 patients) randomly assigned to stenting vs 5.8% (99/1708 patients) randomly assigned to carotid endarterectomy (risk ratio [RR], 1.53; 95% confidence interval [CI], 1.20 - 1.95; P = .0006).
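For readers who want to see how the headline figures follow from the raw counts, the Python sketch below recomputes the risk ratio and an approximate 95% confidence interval from the event counts quoted above, using the standard large-sample formula for the log risk ratio. It is an illustration of the arithmetic, not the trialists' exact analysis.

import math

# Event counts quoted above (any stroke or death in the first 120 days)
events_stent, n_stent = 153, 1725
events_endart, n_endart = 99, 1708

risk_stent = events_stent / n_stent      # ~0.089 (8.9%)
risk_endart = events_endart / n_endart   # ~0.058 (5.8%)
rr = risk_stent / risk_endart            # ~1.53

# Approximate 95% CI on the log scale
se_log_rr = math.sqrt(1/events_stent - 1/n_stent + 1/events_endart - 1/n_endart)
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"Risk ratio: {rr:.2f} (95% CI, {lower:.2f} - {upper:.2f})")
# Prints approximately 1.53 (95% CI, 1.20 - 1.95), matching the published estimate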

Of all subgroup variables assessed in the meta-analysis, only age significantly modified the treatment effect. In patients younger than 70 years, the estimated 120-day risk for stroke or death was 5.8% in the carotid stenting group and 5.7% in the carotid endarterectomy group (RR, 1.00; 95% CI, 0.68 - 1.47).

However, in patients 70 years or older, the estimated risk for stroke or death with stenting was roughly twice that with carotid endarterectomy: 12.0% vs 5.9% (RR, 2.04; 95% CI, 1.48 - 2.82).

The study also found risk estimates for stroke or death within the first 30 days of treatment to be similar in patients 70 years and younger for both groups (5.1% for stenting vs 4.5% for carotid endarterectomy). However, for patients older than 70 years, the estimates were 10.5% in the stenting group vs 4.4% in the endarterectomy group (RR, 2.41; 95% CI, 1.65 - 3.51).

The findings suggest that stenting might be a viable alternative to endarterectomy in younger patients, although there is some uncertainty about the potentially higher rate of recurrent stenosis after stenting vs endarterectomy and the implications this might have for future stroke risk in these patients, the study authors comment.

"With these caveats in mind, an approach of offering stenting when technically feasible as an alternative option to endarterectomy to patients younger than 65-70 years with symptomatic carotid stenosis, in centres in which acceptable periprocedural outcomes have been independently verified, might seem justified, as long as patients are made aware of a possible increase in the risk of restenosis," they write.

Limitations of the meta-analysis include insufficient statistical power to compare treatment risks in some subgroups, such as women; patients presenting with ocular ischemia; and patients with severe contralateral carotid disease. Also, the number of procedures undertaken by each surgeon and interventionalist before they joined the trials is unknown, as is the effect of individual experience on complication rates.

The study authors conclude that this prospectively defined meta-analysis of EVA-3S, SPACE, and ICSS, which showed a highly significant, age-dependent variation in the risks of stenting, has implications for clinical practice. "There is strong evidence that, in the short term, the harm of stenting compared with endarterectomy decreases with younger age," they write.

The study was funded by The Stroke Association. The study authors have disclosed no relevant financial relationships.

Lancet. Published online September 10, 2010.

Sunday, September 12, 2010

AHA Releases Scientific Statement on Poststroke Care

From Medscape Medical News
Pauline Anderson

September 10, 2010 — The American Heart Association (AHA) has released a comprehensive scientific statement on nursing and interdisciplinary rehabilitation care for stroke patients.
The statement summarizes the best available evidence for poststroke therapies and includes recommendations for management of stroke survivors and their families during inpatient, outpatient, chronic care, and end-of-life settings.

"It's like a large dictionary, but people can find kernels of very useful information in terms of recovery of a stroke patient," said Ralph Sacco, MD, a stroke neurologist and president of the AHA.

Dr. Sacco emphasized the comprehensive nature of the AHA statement. "Nobody has really sat down and tried to tackle all of the evidence that's out there in both the evaluation and the rehabilitation of people with deficits related to stroke," he told Medscape Medical News. "The statement includes 602 references, which is tremendous."

The statement was published online September 2 and will appear in the October issue of Stroke.

The guidelines should help educate nurses and other members of the interdisciplinary team about the potential for recovery in the more chronic phases of stroke care. Healthcare professionals are often unaware of patients' potential for improvement during this later phase, said the authors.

"What this document is saying is that there is evidence that stroke survivors continue to improve," said Dr. Sacco. "What I tell my patients is that language and speech can recover for years after a stroke, while motor recovery is often something people think about in the early phases."

Reframing Complexities

The report represents "an initial effort to reframe the complexities of interdisciplinary, postacute care of stroke survivors into a format that optimizes the potential for the highest achievable outcomes and quality care," the authors write.

The writing group, chaired by Elaine L. Miller, PhD, RN, from the University of Cincinnati College of Nursing in Ohio, prepared the statement on behalf of the AHA Council on Cardiovascular Nursing and the Stroke Council.

According to background information in the statement, stroke continues to represent the leading cause of long-term disability. About 50 million stroke survivors worldwide currently cope with significant physical, cognitive, and emotional deficits, and 25% to 74% of these survivors require some assistance.

WHO Classifications

The report includes the World Health Organization (WHO) International Classification of Functioning, Disability and Health. This model of disease is being used to provide a common framework to deliver and study the efficacy of rehabilitation outcomes across settings, said the authors.

The statement outlines the roles of key members of the interdisciplinary team and emphasizes the diverse skills necessary for poststroke rehabilitation. "It makes it clear that it's not just physicians: it's physicians, it's nurses, it's speech therapy, it's cognitive therapy, it's occupational therapy," said Dr. Sacco.

Treating care settings in a continuum, as the authors have done, is a novel approach, said Dr. Sacco. "They broke the evidence down into care in the inpatient rehabilitation setting, care in the outpatient setting, and care in the chronic care setting and then supported it across all the types of rehabilitation focus areas."

The statement provides an overview of the evidence for various screening tests and medical treatments, including traditional rehabilitation therapies and newer techniques, such as robot-assisted therapies, electrical stimulation, and weight-assisted treadmill devices.

Support for Caregivers

It also discusses approaches to personal and environmental factors and education and support for caregivers. One of the recommendations is that caregivers should be active members of the interdisciplinary team with common shared goals for recovery and community reintegration.

Dr. Sacco also noted the attention paid to "underemphasized" areas such as depression in the poststroke phase and communication and cognition. "It really does make us recognize the need for multidimensional rehabilitation of stroke patients."

Once published, the next step is to determine how best to distribute the statement. AHA councils will develop an implementation plan to disseminate the information "to get it into the right hands," said Dr. Sacco.

"It's one thing to write a great comprehensive document; it's another to determine what measurements to use to see how we're doing and how we can then try to improve performance and adherence to these guidelines."

Recommended Reference

Approached for a comment, Howard S. Kirshner, MD, professor and vice chair in the Department of Neurology at Vanderbilt University Medical Center, Nashville, Tennessee, and a member of the American Academy of Neurology, agreed the review is "comprehensive" and said he would recommend it as a reference source for medical and allied health professionals.

"The interested physician, nurse, therapist, or family member can find a summary of virtually all treatments given for the rehabilitation of stroke, along with the level of evidence for benefit," Dr. Kirschner wrote in an email to Medscape Medical News. "What is new about this overview is the context not only of the patient's medical and functional limitations, but also the patient's social environment, access to health care, and family support."

However, Dr. Kirshner found some of the emphasis on meta-analyses rather than individual studies "problematic." One example, he said, is the evidence for antidepressant therapy of poststroke depression. "The result is a mixed statement of benefit, whereas a look at individual studies would bring up the stronger evidence in the better organized and controlled studies."

Time Factor

He noted that the meta-analyses did not include the factor of time. "Studies done in the first weeks after stroke tend to find a close correlation between depression and left hemisphere, especially anterior lesions, whereas studies performed several months after onset tend to find equal incidence of depression in patients with right and left hemisphere strokes."

He felt there were other omissions, too. For example, the discussion of fall risk omits emphasis on cognitive issues such as confusion and dementia, which could greatly increase such risk, said Dr. Kirshner. In the discussion of electrical stimulation, there is no mention of external stimulators built into splint devices, and discussion of treatments for spasticity did not mention oral baclofen, he noted.

But what he missed perhaps the most was an appeal to a stroke patient's "bill of rights" in terms of what the healthcare system should ideally provide to all stroke survivors and their families, he said.

Although he would recommend it as a reference, Dr. Kirshner acknowledged that few healthcare professionals would read the entire report. "Its comprehensive nature and summary of evidence make it not the easiest article to read."

Disclosures for all authors appear in the original report.

Stroke. Published online September 2, 2010.

Thursday, September 9, 2010

About Face: Oral Bisphosphonates Linked to Esophageal Cancer

From Medscape Medical News
Nick Mulcahy

September 3, 2010 — In an about-face of findings, a new study reports that oral bisphosphonates — widely prescribed for osteoporosis — are associated with an increased risk for esophageal cancer. The study comes just weeks after another study, using the same database, found no such link.

The new case–control study found that the increase in risk was statistically significant among patients who had 10 or more prescriptions for the drugs, compared with patients who had 1 to 9 prescriptions.

The relative risk also increases with duration of use; it doubles over about a 5-year period, according to the authors of the new study, led by Jane Green, PhD, a clinical epidemiologist from the Cancer Epidemiology Unit at the University of Oxford in the United Kingdom.

However, an expert in gastroesophageal cancer contacted by Medscape Medical News suggested that clinicians should not stop prescribing oral bisphosphonates.

Clinicians need to weigh the benefits of oral bisphosphonates, such as the reduction in hip fractures, against risks such as the increase in esophageal cancer, said Gerard Blobe, MD, associate professor of medicine at Duke University School of Medicine in Durham, North Carolina.

"The absolute risk increase found in the study is still pretty small — 1 in 1000 to 2 in 1000 with about 5 years of use of oral bisphosphonates," he observed about the new data.

"I would say that, generally, the benefits outweigh the risks," he said.

Dr. Blobe also explained that oral bisphosphonates could theoretically cause esophageal cancer because "irritation" of the esophagus might "set the stage for cancer in some patients."

The new study's findings on oral bisphosphonates conflict with a recent British study that found no link.

Notably, both studies used the UK General Practice Research Database.

"What would cause these differences [in findings]?" asks an editorial that accompanies the new study, both of which were published online September 2 in BMJ.

A "major difference" in the 2 studies is the average length of follow-up, notes the editorialist, Diane Wysowski, PhD, an epidemiologist at the US Food and Drug Administration (FDA) in Silver Spring, Maryland. The earlier negative study had 4.5 years of follow-up; the new positive study has 7.5 years.

The positive study also had "an adequate sample size," said Dr. Wysowski.

Dr. Green and her coauthors said that their positive study had "greater statistical power" than the negative study — with 5 matched controls per case, compared with equal numbers in the exposed and comparison groups in the negative study.

Even if oral bisphosphonates cause cancer, "the incidence of esophageal cancer in this population of users would be expected to remain relatively low," writes Dr. Wysowski.

Despite this comment, another gastroenterologist from Duke said that esophageal cancer is of growing concern in the United States and other industrialized countries.

"The incidence of esophageal adenocarcinoma is rising more rapidly than any other malignancy in the past 5 years in Western countries. The reasons are unknown," said Ivy Altomare, MD, assistant professor of medicine and a colleague of Dr. Blobe.

Advice to Patients

The immediate question for clinicians is what to tell patients, says Dr. Wysowski. Her primary recommendation is to tell patients to take the pills correctly. Dr. Blobe explained the reason for the recommendation.

As soon as it was recognized that oral bisphosphonates were associated with esophageal problems, the package inserts for these drugs were changed, he said. The change included directions on how to best take the drugs, and dictated that patients with a history of esophageal problems not be prescribed these agents.

After the package insert was changed, "the incidence of esophageal-related problems with the drugs subsequently dropped," he said. This chain of events suggests that noncompliant patients might be the ones who develop problems, he said.

Dr. Wysowski reminds clinicians to reinforce directions for drug usage with each prescription. Namely, remind patients to take oral bisphosphonates in the morning with a full glass of plain water on an empty stomach — at least 30 to 60 minutes before the first food, beverage, or medication.

Also tell patients not to recline for at least 30 to 60 minutes after taking an oral bisphosphonate.

The new study results might be, in an odd way, useful to clinicians, suggested Dr. Blobe. "This result could incentivize patients to take the pill correctly," he said.

Study Results

The 2 studies come out about a year and a half after the FDA reported 23 cases of esophageal cancer between 1995 and 2008 in patients using alendronate and another 31 cases in patients using a variety of bisphosphonates in Europe and Japan.

Dr. Wysowski was the author of that FDA report, which was published in January 2009 (N Engl J Med. 2009;360:89-90). That paper immediately prompted a couple of quick studies. However, those 2 studies had methods and data that were "sparse," notes Dr. Wysowski in her current editorial.

So that leaves the new positive study and the earlier negative study as the main objects of attention in the debate on this matter.

The 2 sets of investigators studied the problem in slightly different ways, points out Dr. Wysowski.

The negative study compared the incidence of esophageal cancer and gastric cancer in patients who were exposed or not exposed to oral bisphosphonates, and found no increase in the risk for either cancer. The positive study compared the frequency of oral bisphosphonate exposure in cases and matched noncases.

In their positive study, Dr. Green and her colleagues set out to test the hypothesis that risk for esophageal, but not gastric or colorectal, cancer is higher in users of oral bisphosphonates.

They used a UK General Practice Research Database cohort and identified men and women 40 years or older who were diagnosed from 1995 to 2005 — 2954 with esophageal cancer, 2018 with gastric cancer, and 10,641 with colorectal cancer. Five controls per case were matched for age, sex, general practice, and observation time.

After adjustment for smoking, alcohol, and body mass index, they examined the relative risks for the 3 different cancers.

They found that the incidence of esophageal cancer was higher in cases with 1 or more previous prescriptions for oral bisphosphonates than in those with no such prescriptions (relative risk [RR], 1.30; 95% confidence interval [CI], 1.02 - 1.66; P = .02).

As noted above, the risk for esophageal cancer was significantly higher after 10 or more prescriptions (RR, 1.93; 95% CI, 1.37 - 2.70) than after 1 to 9 (RR, 0.93; 95% CI, 0.66 - 1.31; P for heterogeneity = .002), and for use for more than 3 years (RR vs no prescription, 2.24; 95% CI, 1.47 - 3.43).

This doubling of risk in longer-term users (ie, those using the drugs for more than 3 years; average length of use, about 5 years) accounts for most of the excess risk associated with bisphosphonates, according to the data. The relative risks for people who used oral bisphosphonates for less than a year or for 1 to 3 years were both about 1 — in other words, these patients had no increased risk.
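To put the roughly doubled relative risk into absolute terms, the Python sketch below combines it with the background figure quoted earlier by Dr. Blobe (about 1 case of esophageal cancer per 1000 people over roughly 5 years). Both numbers are taken from the comments reported in this article, and the conversion is a simple illustration rather than the study's own calculation.

baseline_risk = 1 / 1000   # ~1 case per 1000 over ~5 years, as quoted by Dr. Blobe
relative_risk = 2.0        # approximate RR for more than 3 years of oral bisphosphonate use

risk_with_use = baseline_risk * relative_risk
absolute_increase = risk_with_use - baseline_risk
number_needed_to_harm = 1 / absolute_increase

print(f"Risk without use:  {baseline_risk*1000:.0f} per 1000")
print(f"Risk with use:     {risk_with_use*1000:.0f} per 1000")
print(f"Absolute increase: ~1 extra case per {number_needed_to_harm:.0f} long-term users")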

Cancers of the stomach and colorectum were not associated with oral bisphosphonates, the authors reported.

The authors have disclosed no relevant financial relationships.

BMJ. 2010; 341:c4444, c4506.

Obesity as a Risk Factor for Nosocomial Infections in Trauma Patients

Serrano PE, Khuder SA, Fath JJ
J Am Coll Surg. 2010;211:61-67

Study Summary

What is the role of obesity as an underlying cause of sepsis in trauma patients? Serrano and colleagues conducted a retrospective study of 1024 trauma patients treated at a single level I trauma center, examining different levels of obesity in relation to nosocomial, pulmonary, and wound infections after adjustment for age, sex, comorbidities, and injury severity.
Overall, obesity was associated with a 4.7-fold increased risk for infection after adjustment for other factors (P = .03). The increased risk associated with obesity was observed in all categories of infection.

Viewpoint
This study points out that obesity is an additional, powerful factor leading to increased infection rates after injury, even after adjustment for other comorbidities. The underlying mechanisms linking obesity to infection need to be clarified. With increasing scrutiny of hospital- and surgeon-specific infection rates by outside agencies, it is important to recognize the contribution of patient factors such as obesity.


Tuesday, September 7, 2010

Treatments, Trends, and Outcomes of Acute Myocardial Infarction and Percutaneous Coronary Intervention

From Journal of the American College of Cardiology

Matthew T. Roe et al

Abstract and Introduction
Introduction
Coronary artery disease remains a major public health problem in the U.S. as many Americans experience an acute myocardial infarction (MI) and/or undergo percutaneous coronary intervention (PCI) each year. Given the attendant risks of mortality and morbidity, acute MI remains a principal focus of cardiovascular therapeutics. Moreover, 30-day mortality and rehospitalization rates for acute MI are publicly reported in an effort to promote optimal acute MI care, and aspects of MI care delivery are the focus of local, regional, and national quality initiatives.[1–3] PCI remains a central therapy for patients with symptomatic coronary artery disease, particularly among patients with acute MI, and has garnered tremendous attention in the last decade with issues such as the risks and benefits of drug-eluting stents (DES) and adjunctive antithrombotic therapies.

However, there are few representative data describing contemporary patterns of care and outcomes trends for patients with acute MI and/or those undergoing PCI. This is of particular importance because the process of updating clinical practice guidelines and quality metrics for acute MI and PCI has accelerated.[4] Updates or revisions to the American College of Cardiology (ACC)/American Heart Association (AHA) practice guidelines for PCI, ST-segment elevation myocardial infarction (STEMI), and unstable angina (UA)/non–ST-segment elevation myocardial infarction (NSTEMI) have been published within the last 3 years, building upon prior versions published earlier in the decade.[5–7] The ACC and AHA have also published performance measures to direct quality assessment and improvement activities.[8] However, data are lacking on current guideline adherence as well as on trends in the quality of care and outcomes for the large population of patients in the U.S. with symptomatic coronary artery disease.

Large-scale, national clinical registries provide an important opportunity to evaluate current clinical practice. The American College of Cardiology's National Cardiovascular Data Registry (NCDR) comprises a suite of programs involving >2,400 hospitals in the U.S. (www.ncdr.com). We analyzed the NCDR Acute Coronary Treatment and Intervention Outcomes Network (ACTION) Registry–Get With The Guidelines (AR-G) and Catheterization PCI (CathPCI) databases to characterize recent trends in treatment and outcomes among patients with acute MI and those undergoing PCI. More specifically, we sought to evaluate patient and hospital characteristics, rates of guideline adherence, procedural details, and in-hospital outcomes related to acute MI and PCI care over the last several years.

For the rest of the article, see:

http://www.medscape.com/viewarticle/726014

'Superglue' for Circumcision Cost Effective and Cosmetically Appealing

From Reuters Health Information
Karla Gale, MS

September 3, 2010 — A sutureless and scalpel-free circumcision technique saves time and money, and the outcomes look good too, investigators report.

Dr. Andrew J. Kirsch and associates at Emory University School of Medicine, Atlanta, have done circumcisions using electrocautery and 2-octyl cyanoacrylate (2-OCA) instead of scalpels and sutures for the last four years.

In an August 24th online paper in the Journal of Urology, they report on 1008 boys ages 6 months to 12 years (mean 1.8 years) who had either primary circumcision or circumcision revision between 2006 and 2009.

The authors did all the procedures with electrocautery. They used 2-OCA in 493 primary procedures and 248 revisions, and 6-0 sutures in 152 primary circumcisions and 115 revisions.

The trial was not randomized, the authors note; the main reason for suture use was a resident's request for suturing experience.

The most complex condition treated was phimosis; there were no cases of chordee, penile torsion, hypospadias or phalloplasty. They did not use Gomco clamps.

After removing the foreskin, the surgeons would pinch the shaft skin at the base of the penis, push it distally toward the preputial collar, then apply a thin layer of 2-OCA dorsally and intermittently in a circumferential fashion. After 30 seconds' drying time, they applied 2-OCA continuously around the apposed skin edges. They spread antibiotic ointment over the entire penis, scrotum, and mons pubis (partly to keep the 2-OCA from sticking) and told parents to do the same when changing diapers, or at least twice a day.

Three patients treated with 2-OCA and two treated with sutures reentered the hospital with bleeding, but none needed blood transfusion. Within a year and a half, one patient in the suture group underwent a revision because the parents didn't like the way the first procedure looked, and one patient in the 2-OCA group was treated for adhesions. There were no cases of dehiscence due to erection.

Operative times averaged 8 minutes using 2-OCA and 27 minutes using sutures (p < 0.001).

The authors note that at their institution, the anesthesia fee is about $189 per 15-minute block, and the operating room fee for circumcision is $571 for each 15 minutes. When factoring in the cost of 2-OCA and sutures, the 2-OCA technique cost $744 less than the suture technique.
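The cost figures quoted above can be roughly sanity-checked with a few lines of arithmetic. The Python sketch below assumes that facility time is billed in whole 15-minute blocks and that the gap between the facility-fee difference and the reported $744 saving reflects the higher material cost of 2-OCA relative to sutures; both the block-rounding rule and that interpretation of the residual are assumptions, since the article does not itemize the material costs.

import math

ANESTHESIA_FEE_PER_BLOCK = 189   # per 15-minute block, as quoted in the article
OR_FEE_PER_BLOCK = 571           # per 15 minutes, as quoted in the article
BLOCK_MINUTES = 15

def facility_cost(operative_minutes: int) -> int:
    # Assumption: fees accrue in whole 15-minute blocks
    blocks = math.ceil(operative_minutes / BLOCK_MINUTES)
    return blocks * (ANESTHESIA_FEE_PER_BLOCK + OR_FEE_PER_BLOCK)

cost_2oca = facility_cost(8)     # mean operative time with 2-OCA
cost_suture = facility_cost(27)  # mean operative time with sutures

print(f"Facility cost, 2-OCA:   ${cost_2oca}")                # $760
print(f"Facility cost, sutures: ${cost_suture}")              # $1520
print(f"Facility saving:        ${cost_suture - cost_2oca}")  # $760
# The reported net saving of $744 would then imply that 2-OCA materials cost
# roughly $16 more per case than sutures, under these assumed billing rules.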

"Patient satisfaction was equally high in all groups," the authors report, "but surgeon satisfaction was higher in the 2-OCA groups due to absent suture tracks and suture sinuses."

They emphasize that, when using 2-OCA, it is important to determine how much preputial skin to excise to achieve the ideal skin fit, characterized by tension-free approximation of the shaft skin with the preputial collar even before the 2-OCA is applied.

In an editorial comment, Dr. Mark R. Zaontz, from Temple University School of Medicine, Philadelphia, says that his practice has also adopted the sutureless, scalpel-free technique, even though "running suture... is as rapid in my hands as applying the glue."

However, his group places holding sutures at the 6 and 12 o'clock positions to better line up wound edges. He cautions that this technique may not be as appropriate for older boys. Also, he is concerned about glue getting between the skin edges and the risk of epidermoid cyst formation.

In response, Dr. Kirsch and his group agree that this technique is probably best reserved for prepubescent boys "given the decreased vascularity and force of erection."

J Urol. 2010;184:1758-1762.

Vitamin D Deficiency Linked to Autoimmune Diseases

From WebMD Health News

Salynn Boyles

August 26, 2010 — There is now biologic evidence to back up the belief that vitamin D may protect against autoimmune diseases and certain cancers.

A new genetic analysis lends support to the idea that the vitamin interacts with genes specific for colorectal cancer, multiple sclerosis, type 1 diabetes, and other diseases, says Oxford University genetic researcher Sreeram Ramagopalan.

The study is published in Genome Research.

When Ramagopalan and colleagues analyzed the binding of vitamin D receptors to gene regions previously identified with different diseases, they found evidence of increased binding for multiple sclerosis, Crohn's disease, lupus, rheumatoid arthritis, colorectal cancer, and chronic lymphocytic leukemia (CLL).

"Genes involved in autoimmune disease and cancer were regulated by vitamin D," Ramagopalan tells WebMD. "The next step is understanding how this interaction could lead to disease."

Role of Vitamin D Supplementation

The role of vitamin D supplementation in preventing these diseases is also not well understood.

Exposure to sunlight is an efficient way to raise blood levels of vitamin D hormone, and food sources of the nutrient include oily fish like salmon, fortified milk, and other fortified foods.

But most people would have a hard time getting the vitamin D they need from food, and the increased use of sunscreen has reduced sun exposure.

By one recent estimate, as many as half of adults and children in the U.S. were deficient in the vitamin.

Current recommended daily vitamin D intake is 200 IU (international units) for those up to age 50; 400 IU for people 51 to 70; and 600 IU for those over 70. Most experts say that these doses are too low.

Many experts, including Ramagopalan, say 2,000 IU of the vitamin may be optimal for preventing disease.

Blood levels of the vitamin are measured as 25-hydroxyvitamin D. Levels below 20 nanograms per milliliter are generally considered deficient.

Harvard School of Public Health nutrition researcher Edward Giovannucci, MD, says blood 25-hydroxyvitamin D levels of between 30 and 40 nanograms per milliliter may be about right for reducing the risk of autoimmune diseases and certain cancers.

While he says some people can reach these levels without supplementation, many others would need to take 1,000 to 2,000 IU of the vitamin a day.

"Based on what we know, I think it is reasonable to recommend that people maintain blood levels of around 30 nanograms per milliliter," he says.

Unanswered Questions

But vitamin D researcher JoAnn E. Manson, MD, says it is far too soon to recommend taking much larger doses of vitamin D than those currently advised.

Manson chairs the preventive medicine department at Brigham and Women's Hospital in Boston and is principal investigator of a large U.S. study on vitamin D.

Still in the recruitment stage, the five-year, 20,000-person study will explore the impact of 2,000 IU of vitamin D on the risks for a wide range of health conditions, including cancer, heart disease, diabetes, hypertension, and depression. The study will also examine the effects of omega-3 fatty acids.

"I think it is important that we not leap ahead of the evidence in recommending high doses of vitamin D," she says. "We will soon have a better understanding of the optimal doses of vitamin D and the optimal blood levels associated with the best balance of benefits and risks. But right now there are many unanswered questions."


Alcohol Use Linked to Risk for Hormone-Sensitive Breast Cancers

From MedscapeCME Clinical Briefs

News Author: Laurie Barclay, MD
CME Author: Laurie Barclay, MD

August 27, 2010 — Alcohol use is linked to a risk for hormone-sensitive breast cancers, according to results from the Women's Health Initiative (WHI) Observational Study reported Online First August 23 and to appear in the September 22 print issue of the Journal of the National Cancer Institute.

"Alcohol consumption is a well-established risk factor for breast cancer," write Christopher I. Li, MD, PhD, from Fred Hutchinson Cancer Research Center in Seattle, Washington, and colleagues. "This association is thought to be largely hormonally driven, so alcohol use may be more strongly associated with hormonally sensitive breast cancers. Few studies have evaluated how alcohol-related risk varies by breast cancer subtype."

As part of the prospective WHI, the goal of this analysis was to determine the association between self-reported alcohol intake and the risk for postmenopausal breast cancer. Of 87,724 participants enrolled from 1993 through 1998, a total of 2944 were diagnosed with invasive breast cancer during follow-up through September 15, 2005.

Based on multivariable adjusted Cox regression models used to calculate hazard ratios (HRs) and 95% confidence intervals (CIs), along with 2-sided statistical tests, alcohol intake was directly associated with an overall risk for invasive breast cancer, invasive lobular carcinoma, and hormone receptor-positive tumors (for all, P trend ≤ .022).

The association between alcohol intake and breast cancer was stronger for certain types of invasive breast cancer than for others. The risk for hormone receptor-positive invasive lobular carcinoma in women who drank at least 7 alcoholic beverages per week was nearly double that in never-drinkers (HR, 1.82; 95% CI, 1.18 - 2.81), but the risk for hormone receptor-positive invasive ductal carcinoma was not statistically significantly increased (HR, 1.14; 95% CI, 0.87 - 1.50). The difference in HRs per drink per day among current drinkers was 1.15 (95% CI, 1.01 - 1.32; P = .042).

Absolute rates of hormone receptor-positive lobular cancer were 5.2 per 10,000 person-years among never-drinkers and 8.5 per 10,000 person-years among current drinkers. For hormone receptor-positive ductal cancer, absolute rates were 15.2 and 17.9 per 10,000 person-years, respectively.
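The absolute rates quoted above can be turned into a rough sense of clinical magnitude with a few lines of arithmetic, as in the Python sketch below. This is a crude comparison of the published rates and, unlike the hazard ratios reported in the study, is not adjusted for other risk factors.

# Rates quoted in the article, per 10,000 person-years
rates = {
    "HR-positive lobular": (5.2, 8.5),    # (never-drinkers, current drinkers)
    "HR-positive ductal": (15.2, 17.9),
}

for subtype, (never, current) in rates.items():
    excess = current - never
    crude_ratio = current / never
    print(f"{subtype}: {excess:.1f} excess cases per 10,000 person-years "
          f"(crude rate ratio {crude_ratio:.2f})")
# Roughly 3.3 extra lobular and 2.7 extra ductal cases per 10,000 woman-years
# among current drinkers; the crude ratios differ from the adjusted HRs.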

"Alcohol use may be more strongly associated with risk of hormone-sensitive breast cancers than hormone-insensitive subtypes, suggesting distinct etiologic pathways for these two breast cancer subtypes," the study authors write.

Limitations of this study include observational design with possible residual confounding, and assessment of alcohol use only at baseline, so that extensive measurement errors or changes in alcohol use could affect the study conclusions. In addition, data on tumor characteristics were based on information abstracted from local pathology reports, which may have led to an unknown degree of misclassification because of variations in histology and hormone-receptor assessment across the United States.

"[T]his study provides prospective evidence that the relationship between alcohol use and breast cancer risk varies by breast cancer subtype, with risks most pronounced for invasive lobular and hormone receptor–positive tumors," the study authors conclude.

"Hence, alcohol is another established breast cancer risk factor that appears to be differentially associated among breast cancer subtypes, and this pattern of associated risks indicates that tumors defined by both histology and hormone receptor status have somewhat different etiologic determinants. These findings highlight the importance of incorporating breast cancer subtype information in etiologic studies of the disease."

This work was supported by the WHI program, which is funded by the National Heart, Lung, and Blood Institute, National Institutes of Health, US Department of Health and Human Services. The study authors have disclosed no relevant financial relationships.

J Natl Cancer Inst. Published online August 23, 2010.

Clinical Context

Many health risks of alcohol consumption have been well documented. The risk for breast cancer is increased in association with alcohol intake, but the association of different histologic characteristics and hormone receptor status with alcohol intake has not been well described in previous studies.

Compared with ductal carcinomas, lobular carcinomas are more frequently hormone receptor positive. Because the association between alcohol intake and breast cancer is thought to be largely hormonally driven, alcohol use may be more strongly associated with hormonally sensitive breast cancers.

Monday, September 6, 2010

Limit Fingerstick Devices to Just 1 Patient, FDA and CDC Say

From Medscape Medical News

Robert Lowes

August 27, 2010 — In the war against the transmission of bloodborne pathogens such as hepatitis B virus, the US Centers for Disease Control and Prevention (CDC) and the US Food and Drug Administration (FDA) recommended yesterday that fingerstick devices never be used on more than 1 person. They issued similar guidance for point-of-care (POC) blood-testing devices, such as glucometers, and for instruments used for insulin administration.

The 2 federal agencies said the shared use of blood-lancing and POC testing devices has led to a steady rise in reports of bloodborne-pathogen infections — mostly involving hepatitis B virus — during the past 10 to 15 years. Such communal bloodletting occurs in settings that range from public health fairs to physician offices, but the problem is most serious in long-term care and assisted-living facilities.

In addition to stipulating a rule of 1 fingerstick device for 1 person, the FDA and CDC recommended that when clinicians obtain blood samples to monitor blood glucose levels, they should rely on fingerstick devices that automatically disable themselves after a single use — those in which the blade retracts, for example. These single-use, auto-disabling instruments are also known as safety lancets.

The new recommendation applies to fingerstick devices that are approved for obtaining blood from multiple individuals. The FDA stated that it would soon take action to have these items labeled for use with only a single person.

Insulin Pens Also Should Not Be Shared

The FDA and CDC did not take as tough a precautionary stand on POC testing devices such as blood glucose and anticoagulation meters. They recommended that these devices be used with only 1 person "whenever possible." However, if it is not possible to dedicate a POC testing device to a single patient, clinicians should properly clean and disinfect the device between every use, as described in the product label.

The same concerns about preventing the spread of bloodborne pathogens extend to insulin administration. Insulin pens — which contain several doses of insulin that patients can administer to themselves — should never be shared, according to the FDA and CDC. When clinicians assign them to patients, the devices should be labeled for single-patient use.

Whenever possible, multidose vials of insulin should be limited to a single person. If not, the vials should be stored and prepared in a dedicated area away from the patient care environment and potentially contaminated equipment. Insulin should always be administered with a new needle and syringe, which likewise should never be used to administer insulin to more than 1 person, and then "disposed of immediately after use in an approved sharps container."

Physicians should wear gloves for any task that potentially exposes them to blood or body fluids, and they should change gloves between patient contacts, even when they work with patient-dedicated POC devices or single-use, self-disabling fingerstick devices.

More information about the warning is available on the Web sites of the FDA and the CDC.