Thursday, July 12, 2012

5 (Incorrect) Reasons Oncologists Avoid Bad News Talks


From Medscape Medical News > Oncology

Nick Mulcahy

July 6, 2012 — Five commonly accepted reasons that oncologists avoid discussing poor prognosis with cancer patients are all "incorrect," according to an essay published online July 2 in the Journal of Clinical Oncology.
There is one reason for avoiding such talks that "holds truth": Doctors "do not like to have these discussions" because "they are hard on us," write Jennifer Mack, MD, from the Dana-Farber Cancer Institute and Children's Hospital Boston in Massachusetts, and Thomas J. Smith, MD, from the Johns Hopkins Medical Institutions, Baltimore, Maryland.
The essayists remind their fellow clinicians that "our patients want us to have these conversations, difficult as they are for all involved."
Drs. Mack and Smith list 5 reasons that oncologists unnecessarily shy away from bad news talks:
  • Patients get depressed
  • The truth kills hope
  • Hospice or palliative care reduces survival
  • This talk is not culturally appropriate
  • Prognosis is unknowable
They then cite evidence that, to varying degrees, dispels each reason.
The field of oncology has recently gotten better at talking to patients about poor prognosis, said an expert asked to comment on the essay.
"There has been a lot of work trying to address difficult communications tasks recently," said Jim Wallace, MD, from the University of Chicago in Illinois. There has been a "trend" in "finding ways to help oncologists improve their skills," he told Medscape Medical News.
Dr. Wallace was the lead author of a 2006 study that surveyed oncologists about breaking bad news to patients, as reported by Medscape Medical News. He and his colleagues found that negative emotions were much more common than positive ones among oncologists. Specifically, 47% of respondents described negative emotions such as sadness and anxiety, whereas only 14% reported positive feelings such as optimism, hope, helpfulness, and relief.
Dr. Wallace has strong opinions about training oncologists in such communication.
"For oncologists who are directly involved in patient care, this training should become mandatory," he said. "Oncology should begin to explore means of evaluating this competency prior to full-time patient care."
The avoidance of bad news talks has some serious consequences, the essayists note. "Patients lose good time with their families and for reflection and spend more time in the hospital and intensive care unit," Drs. Mack and Smith write. The avoidance can be neglectful, they suggest. For example, half of all patients with lung cancer reach the last 2 months of life without having been offered hospice.
Bad news talks are potentially cost effective. "We think this is one way we can improve care, give people more realistic choices, and reduce the rising cost of care," write Drs. Mack and Smith.
They reviewed various books and studies on the subject of discussing bad news with cancer patients, and found a host of "underlying misconceptions" among healthcare professionals. In their essay, they present the evidence they accumulated to counter these "incorrect" ideas about the effects of discussing poor prognosis.
Refuting the Reasons
Patients get depressed. The essayists say the opposite is true. "Giving patients honest information may allow them and their caregivers to cope with illness better." The evidence includes the Coping With Cancer study, in which patients who reported having end-of-life discussions had no higher rates of depression or worry, had lower rates of ventilation and resuscitation, and had more frequent and earlier hospice enrollment (JAMA. 2008;300:1665-1673).
The truth kills hope. Drs. Mack and Smith say that hope can be maintained by patients even after truthful discussions about there being no chance for a cure. They cite studies of cancer patients with advanced disease in which patients were highly hopeful about their lives both before and after the disclosure of the likely prognosis. In other words, hope was a concept about life, regardless of its length, they found.
Hospice or palliative care reduces survival. The essayists note that multiple studies suggest that survival is equal or better with hospice or palliative care. For example, in a study of 4500 Medicare beneficiaries, hospice use was associated with increased survival in patients with either congestive heart failure or 1 of 5 cancers (J Pain Symptom Manage. 2007;33:238-246).
This talk is not culturally appropriate. Drs. Mack and Smith acknowledge that it is "true that patients of different ethnic and cultural backgrounds often have different preferences for information." But they argue that no clinician should assume a person of a particular background does not want to talk about death because some of his or her compatriots feel that way. "Physicians who want to know their patients' preferences for prognostic information should ask," they say.
Prognosis is unknowable. The essayists admit this is true — prognosis is a mystery to some degree. But they argue that "although we never know precisely how long a patient has to live, uncertainty should not be used as an excuse." They maintain that a "reasonable prognosis or range of possible outcomes" can help patients come "closer to the truth."
J Clin Oncol. Published online July 2, 2012. Abstract

Western-Style Fast Food Increasing Diabetes, CHD Deaths in Southeast Asia


From Heartwire

Michael O'Riordan


July 4, 2012 (Minneapolis, Minnesota) — Westernized fast-food restaurants are proliferating throughout Asia, leading to a substantial increase in the risk of developing diabetes and coronary heart disease, research shows. In an analysis of more than 50,000 Chinese Singaporeans, those who ate fast food twice a week or more had a 27% increased risk of developing diabetes and a 56% increased risk of dying from coronary heart disease.
"For the results with type 2 diabetes, there is more of a modest association for people who ate Western-style fast food two or more times a week compared to the people who reported not eating it," lead investigator Dr Andrew Odegaard (University of Minnesota School of Public Health, Minneapolis) told heartwire . "The real strong results are with the people who are dying from coronary heart disease. We can only speculate and hypothesize because we don't have individual patient data, but we think it might be because trans-fatty acids have never been regulated in Singapore. There is documentation that these are still widely used by fast-food companies outside of North America."
The study was published online July 2, 2012 in Circulation.
Big Macs, Whoppers, and Heart Disease
The consumption of Westernized fast food, including McDonald's, Burger King, and Kentucky Fried Chicken, has rapidly expanded in developing and recently developed countries throughout the world. With the proliferation of fast-food restaurants, there has been concern among health experts that diabetes, hypertension, dyslipidemia, and metabolic syndrome, along with an increased incidence of coronary heart disease, are exported hand-in-hand with Big Macs and Whoppers.
"In the scientific press, as well as the popular press, everybody has linked eating fast food with poor health outcomes and given the composition of fast food this makes sense," said Odegaard.
However, studies directly linking the consumption of fast food with poor health outcomes are limited. Fast food didn't really take hold in Singapore until the late 1980s; the Singapore Chinese Health Study gave Odegaard and colleagues the opportunity to study the incidence of type 2 diabetes and coronary heart disease in 52,584 participants recruited between 1993 and 1998. For coronary heart disease mortality, 1,397 deaths were recorded through the end of 2009. For type 2 diabetes, 43,176 participants were included in the analysis, and 2,252 cases of diabetes were reported during the follow-up interview, completed at the end of 2004.
Overall, individuals who ate at fast-food restaurants twice per week or more had a significantly increased risk of developing diabetes (hazard ratio [HR] 1.27, 95% CI 1.03–1.54) and dying of coronary heart disease (HR 1.56, 95% CI 1.18–2.06).
Although the researchers state that trans-fatty acids might be one reason for the increased risk of coronary heart disease death, it is just a hypothesis at this stage. Increased consumption of fast food might simply be a prominent marker of a poor diet and lifestyle, and not causal itself, they state. Sensitivity analyses performed by the group, however, showed that the associations remained statistically significant after adjusting for overall dietary patterns, energy intake, and body mass index.
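As an illustration of the kind of covariate adjustment described above, the sketch below fits a Cox proportional hazards model with terms for fast-food intake, dietary pattern, energy intake, and body mass index. The data are simulated and all variable names are hypothetical; this is not the study's dataset or analysis code.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "fastfood_2plus": rng.integers(0, 2, n),      # ate Western-style fast food >= 2 times/week (1 = yes)
    "diet_score": rng.normal(0.0, 1.0, n),        # overall dietary pattern score
    "energy_kcal": rng.normal(2200.0, 300.0, n),  # daily energy intake
    "bmi": rng.normal(24.0, 3.0, n),              # body mass index
})

# Simulate follow-up times whose hazard is higher with frequent fast-food intake
# (hypothetical effect of exp(0.4), roughly 1.5), with administrative censoring at 15 years.
raw_time = rng.exponential(scale=20.0, size=n) / np.exp(0.4 * df["fastfood_2plus"])
df["time"] = raw_time.clip(upper=15.0)
df["chd_death"] = (raw_time <= 15.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="chd_death")
print(cph.hazard_ratios_)  # exp(coef) for fastfood_2plus is the covariate-adjusted hazard ratio

In a fit like this, the hazard ratio for the fast-food term is read the same way as the HRs quoted above: a value of 1.27 corresponds to a 27% higher hazard relative to the reference group.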
"The consumption of Western-style fast food is really growing in Asia and south and southeast Asia, in countries where there are a lot of developing economies," said Odegaard. "When you look at the stock reports and growth reports of the holding companies [for the major fast-food chains], this is their primary engine of growth. What the companies have going on in North America is steady, the market is saturated, but the real growth is in the growing economies. From a cultural standpoint--from what I've researched on the subject--it's convenient and familiar, just like in North America, but it's more of a status thing in the developing economies. They want to experience American culture, and that's a draw."

Moderate Alcohol Intake Could Help Prevent Bone Loss


From Medscape Medical News


Larry Hand
July 11, 2012 — Moderate consumption of alcohol as part of a regular healthy lifestyle may promote bone health in postmenopausal women and help them avoid developing osteoporosis, according to an article published online July 9 in Menopause.
Jill A. Marrone, MS, from the Nutrition Division, School of Biological and Population Health Sciences, Oregon State University, Corvallis, and colleagues conducted a 15-day intervention trial that included postmenopausal women who were healthy, younger than 65 years, and not taking hormone therapy.
During the week before the intervention, 40 postmenopausal women drank their usual amounts of alcohol (average, 1.4 drinks/day) and kept a diary. For the 14-day intervention, the women abstained from drinking any alcohol. On day 15, the researchers gave each woman a measured amount of alcohol, based on her previous drinking pattern, to drink.
During the study, the researchers took blood samples from the women before, during, and after the intervention to assess whether alcohol consumption affected bone turnover, the process through which old bone material is removed and new bone is created. In osteoporosis, more bone is lost than replaced. Postmenopausal women are at increased risk for osteoporosis because of decreased estrogen. However, prior observational studies have shown a correlation between moderate alcohol consumption and higher bone density.
The researchers measured the women's bone mineral density (BMD) using dual-energy X-ray absorptiometry and conducted serum assays to measure bone turnover biomarkers: the bone resorption marker C-terminal telopeptide (CTx) and the bone formation marker osteocalcin. They used a linear regression model to assess the relationship between alcohol intake and BMD, CTx, and osteocalcin.
The researchers found a positive correlation between alcohol level and BMD at the hip and upper leg area. They also saw an increase in osteocalcin and CTx during the 14-day abstinence intervention (4.1% ± 1.6% [P = .01] and 5.8% ± 2.6% [P = .02] compared with baseline, respectively). The levels of the bone turnover markers then trended down again on the morning of day 15 (−3.4% ± 1.4% [P = .01] and −3.5% ± 2.1% [P = .05], respectively) after the participants drank alcohol on the evening of day 14.
"[W]e observed a reduction in bone turnover markers 12 to 14 hours after a single administration of alcohol to values that did not differ significantly from baseline," the researchers write.
"Drinking moderately as part of a healthy lifestyle that includes a good diet and exercise may be beneficial for bone health, especially in postmenopausal women," Urszula Iwaniec, PhD, associate professor in the College of Public Health and Human Sciences at Oregon State University, and one of the study's authors, said in a news release. "After less than 24 hours to see such a measurable effect was really unexpected."
Alcohol's inhibitory effect dampens the excessive bone turnover associated with menopause, the researchers write. "[O]ur findings support the hypothesis that moderate dietary alcohol consumption may slow bone loss in postmenopausal women by attenuating increased bone turnover. The observed actions provide a plausible cellular mechanism for the positive association between moderate alcohol consumption and BMD observed in postmenopausal women," they note.
Limitations of the study include the lack of randomization and ethnic diversity (all but one of the women were white). In addition, as the type of alcohol consumed was not controlled, the possibility exists that some nonalcoholic component of the drinks may have influenced results.
The researchers conclude, however, that "[i]n spite of these limitations, this study demonstrates that moderate alcohol intake has rapid suppressive effects on bone turnover markers in postmenopausal women."
This study was supported by the National Institutes of Health and the John C. Erkkila, MD, Endowment for Health and Human Performance. The authors have disclosed no relevant financial interests.
Menopause. Published online July 9, 2012. Abstract

Wednesday, July 11, 2012

Women and Heart Disease


From Journal for Nurse Practitioners

Sandra A. Carey, APN-BC; Jennifer R. Gray, PhD

A Diagnostic Challenge

Posted: 06/28/2012; Journal for Nurse Practitioners. 2012;8(6):458-463. © 2012 


Abstract and Introduction

Abstract

Diagnosing and managing women who are at risk for or have known coronary heart disease (CHD) continues to be a challenge, despite advancing imaging technologies and recent public awareness campaigns. This challenge persists as a result of many contributing factors that all too often go unrecognized. The aim of this article is to establish the extent and possible causes of gender disparities related to CHD and explore 2 obstacles to diagnosing women with or at risk for CHD: subclinical coronary disease and noninvasive diagnostic imaging.

Introduction

More women in the United States die from cardiovascular disease (CVD) than men do. In 2007, coronary heart disease (CHD) was the number 1 killer of women in all age groups. Nearly 1 in 3 women die from CHD, compared with nearly 1 in 5 from cancer.[1] Although women develop CHD later in life than men do, women younger than 65 years are twice as likely to die from an acute myocardial infarction (MI) as men with comparable age and risk factors.[2] In fact, over 9,000 US women under 45 had an MI in 2009.[3] Early diagnosis increases the probability that a woman with CHD will have positive treatment outcomes and live longer.
Early diagnosis of CHD in women is difficult when many providers are unaware of the risk of a CHD death in their female patients.[4] In reality, 38.2 million US women (34%) are living with some form of CVD. The population of women at risk is even larger.[4] Advanced imaging technologies have given providers a wide range of diagnostic modes to determine CHD in women. However, accuracy and limitations of stress testing in women remain areas of significant confusion.[5]
The aim of this article is to establish the extent and possible causes of gender disparities related to CHD and explore 2 obstacles to diagnosing women with or at risk for heart disease: subclinical coronary disease and noninvasive diagnostic imaging.

Noninvasive Testing in Women

Multiple tools have been developed to identify atherosclerotic disease at its preclinical stages in order to help modify disease progression. Imaging modes have demonstrated gender differences. The high prevalence of nonobstructive CHD and single vessel disease in women reduces the observed accuracy of diagnostic testing and often results in higher false-positive rates.[25] Presently, the ACC/AHA guidelines recommend that women who are asymptomatic with a normal electrocardiogram (ECG) undergo routine exercise stress treadmill testing as the initial screening for heart disease. Unfortunately, ECG changes during exercise are frequent in women and diminish the accuracy of the interpretation. Consequently, specificity as low as 61% has been reported in the literature for women undergoing exercise stress testing without imaging.[26] Stress echocardiography has demonstrated better accuracy for detecting or excluding significant disease, with a mean sensitivity of 81% and specificity of 86%. However, most echocardiography studies have predominantly included men. Additionally, women who are referred for exercise testing are often older or obese and unable to reach the target heart rates required for accurate assessment.[26]
In patients who cannot exercise, dobutamine is the most common stress agent used for stress echocardiography. The sensitivity and specificity of exercise echocardiography in women are reported to be 80%–90% and 82%–86%, respectively. Stress echocardiography has been reported as the most sensitive imaging mode for women. Although great strides have been made in the field of harmonic imaging and contrast echocardiography, female patients often present challenges. A significant amount of expertise is required for interpretation of images, especially in obese or larger-breasted patients. Additionally, single vessel disease, which is more common in women, is better detected with myocardial perfusion imaging (MPI).[27] MPI with exercise or pharmacological stress has also been shown to be of value in risk stratification in women with an intermediate likelihood of CHD.[28]
Regadenoson, a new A2A adenosine receptor agonist, has become the preferred pharmacological agent. This coronary vasodilator is given as a single bolus intravenous injection, which eliminates the need for a continuous infusion. Regadenoson stress tests are not affected by the presence of beta blockers, and total testing time has been reduced to 4 minutes.[29]
A comprehensive meta-analysis of the accuracy of thallium imaging (21 studies, N = 4,113 women) revealed that accuracy was higher in men than in women (sensitivity and specificity = 85% vs 64%–78%). Breast attenuation is frequently reported as problematic because it creates false positives and interferes with imaging of the territory of the left anterior descending coronary artery. The advent of gated single photon emission computed tomography (SPECT) has assisted in differentiating attenuation artifacts from infarcts. Two studies using gated SPECT imaging reported an improvement in sensitivity (91%–92%) for detection of a stenosis > 50%; however, only women who had a high pretest probability for CHD were included in these studies.[28]
Coronary calcium score (CAC), assessed by cardiac computed tomography (CT), is currently being used as a method of early detection of atherosclerosis. Evidence exists that CAC is an independent predictor of cardiac events and mortality, beyond traditional risk factor assessment.[30] The data are strongest for white, non-Hispanic men. Although superior to Framingham risk index predictions, CAC scoring may fail to detect the soft or mixed plaques frequently seen in women.[30]
Coronary angiography using multi-slice spiral CT holds significant promise. Coronary CT angiography (CTA) is a safe and reliable procedure. This technology has proved to be accurate, especially in vessels 1.5 mm or more in diameter without heavy calcification. The key reported advantage of this technology is its negative predictive value (95%–100%) among patients with low to intermediate risk for CAD. However, only small studies have included women (N < 400).[30] More research using coronary CTA in women with subclinical disease could provide improved strategies for identifying asymptomatic women who may benefit from more aggressive primary preventive care.
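The reason a high negative predictive value matters most in lower-risk patients follows from Bayes' rule: predictive values depend on pretest probability as well as on sensitivity and specificity. The short sketch below uses assumed sensitivity, specificity, and pretest-probability values chosen purely for illustration; they are not figures from the studies cited in this article.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (positive predictive value, negative predictive value) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos), true_neg / (true_neg + false_neg)

# A hypothetical test with 90% sensitivity and 85% specificity:
for pretest in (0.10, 0.30, 0.60):  # low, intermediate, and high pretest probability
    ppv, npv = predictive_values(0.90, 0.85, pretest)
    print(f"pretest probability {pretest:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")

With these assumed inputs, the negative predictive value stays near 99% at a 10% pretest probability but falls to roughly 85% at 60%, which is consistent with the point that a negative coronary CTA is most reassuring in low- to intermediate-risk patients.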

Conclusion

The number of deaths from CVD is rising in women but declining in men. This disease process accounts for 49% of overall mortality in women. The problem is expected to escalate because of the aging of the population and because obesity, metabolic syndrome, and diabetes disproportionately affect women.[31] Incomplete understanding of the pathophysiological mechanisms, such as microvascular dysfunction, as well as bias in patient care that is not seen for male patients, may explain the continued poor outcomes in women. Surveys and quality improvement initiatives have consistently highlighted the underuse of evidence-based guidelines for women.[31] Gender-based discrepancies exist in the availability, use, and accuracy of diagnostic testing in women. Nonobstructive CAD is a consistent finding in women undergoing angiography, yet cardiac imaging tests rely on demand ischemia to detect obstructive disease and facilitate accurate risk stratification for women only in the near term (2- to 5-year event-free survival).[32]
A consensus needs to be reached on an appropriate algorithmic approach to diagnostic testing in women. Newer technologies that show promise for increased sensitivity and specificity in women with obstructive and nonobstructive disease need to be globally accepted as standard of care. Large longitudinal studies analyzing subclinical disease in women are needed. Improved understanding of the earlier phases of disease could lead to prevention and limit the major cardiac events that continue to occur. This burden of disease will continue unless a true understanding and appreciation of gender-specific pathophysiology with regard to CHD is achieved.[32]
Nurse practitioners (NPs) have been successfully integrated into the cardiology community to help close gaps in heart failure management. Additionally, many NPs are first-line providers for women. As providers we excel in patient education and health promotion, arguably more successfully than our physician colleagues. The number of NPs practicing in primary care and cardiology will continue to grow. With strength in our numbers, NPs will play a pivotal role in improving both patient and provider awareness of, and adherence to, recommended clinical guidelines for women with or at risk for CHD.

Tuesday, July 10, 2012

Is Statin-associated Cognitive Impairment Clinically Relevant?


From The Annals of Pharmacotherapy

Carlos H Rojas-Fernandez PharmD; Jean-Christy F Cameron BSc(Pharm)

A Narrative Review and Clinical Recommendations


Posted: 04/23/2012; The Annals of Pharmacotherapy. 2012;46(4):549-557. © 2012


Abstract and Introduction

Abstract

Objective: To explore the impact of statin use on cognition.
Data Sources: A literature search was performed using MEDLINE (1950-November 2011), EMBASE (1980-November 2011), and the Cochrane Library (1960-November 2011) using the search terms "cognition/drug effects," "delirium, dementia, amnestic, cognitive disorders/chemically induced," "memory disorders/chemically induced," "hydroxymethylglutaryl-CoA reductase inhibitors/adverse effects," and "hydroxymethylglutaryl-CoA reductase inhibitors." A bibliographic search on included references was also conducted.
Study Selection and Data Extraction: Studies were included for analysis if they were conducted in humans and examined the impact of statin use on cognition as either a primary or secondary endpoint; case reports and case series were also included for analysis.
Data Synthesis: Reports of statin-associated cognitive impairment were found primarily in observational studies (eg, case reports/series). One randomized controlled trial demonstrated that simvastatin impaired some measures of cognition compared to placebo. Conversely, in the majority of randomized controlled trials and observational studies, statins were found to have either a neutral or beneficial effect on cognition. Preliminary data suggest that statins that are less lipophilic (ie, pravastatin and rosuvastatin) may be less likely to contribute to cognitive impairment due to limited penetration across the blood-brain barrier. These drugs would be a logical alternative in cases where cognitive impairment secondary to another statin is suspected.
Conclusions: Despite several reports of statin-associated cognitive impairment, this adverse effect remains a rare occurrence among the totality of the literature. If statin-associated cognitive impairment is suspected, a trial discontinuation can reveal a temporal relationship. Switching from lipophilic to hydrophilic statins may resolve cognitive impairment. The vascular benefits and putative cognitive benefits outweigh the risk of cognitive impairment associated with statin use; therefore, the current evidence does not support changing practice with respect to statin use, given this adverse effect.

Introduction

Since their introduction in 1987, 3-hydroxy-3-methylglutaryl coenzyme A reductase inhibitors (statins) have become the most commonly prescribed agents for the treatment of dyslipidemia. Statins are the most effective and widely used medicines to reduce low-density lipoprotein cholesterol and reduce cardiovascular events. Statins are well tolerated and have minimal adverse effects, the most common being myopathies, effects on liver enzymes, and diarrhea, with rhabdomyolysis occurring rarely. As with all drugs, some adverse effects do not manifest in clinical trials but become evident after use in larger samples and broader patient populations. For example, several case reports and case series have suggested a potential association between statins and cognitive impairment. This possible adverse effect warrants further investigation, as it is at odds with several studies that demonstrate a potential benefit of statins on cognition. Cognitive impairment can also be considered a severe adverse effect with the potential to cause other adverse outcomes such as functional impairment.
Furthermore, the incidence of statin-associated cognitive impairment has not been clearly defined, and considering the number of patients receiving statins, even uncommon adverse effects have the potential to affect a large number of people. For example, in 2002, an estimated 7.8% of the Canadian population was taking statins, which accounts for 2,447,062 Canadians. If the incidence of statin-associated cognitive impairment were only 0.1%, it would affect nearly 2,500 people in Canada (0.001 × 2,447,062 ≈ 2,447). Since cognitive impairment is a potentially debilitating adverse effect, it is important to better understand this risk to adequately assess the appropriateness of statins for individual patients. Additionally, although recent reports have highlighted the risk of cognitive impairment with the use of statins, few have provided a balanced discussion of this risk alongside the beneficial effects of these agents on both cardiovascular outcomes and possibly cognition. It is important to present the benefits and risks of these medications so that clinicians and their patients can make informed decisions.
This article explores the potential adverse effect of statins on cognition, and considers the established vascular and putative cognitive benefits of these drugs as a balance to the risk of adverse effects.
For the rest of the article, go to http://www.medscape.com/viewarticle/762126_8

CDC Updates Hepatitis B Recommendations for Infected HCWs


From Medscape Medical News


Emma Hitt, PhD
July 6, 2012 — The US Centers for Disease Control and Prevention (CDC) has updated its 1991 recommendations for the management of hepatitis B virus (HBV)–infected healthcare providers and students to prevent HBV transmission.
The new recommendations were prepared by Scott D. Holmberg, MD, and colleagues from the CDC's Division of Viral Hepatitis, National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention. They were published on July 6 in the CDC's Morbidity and Mortality Weekly Report.
"The primary goal of this report is to promote patient safety while providing risk management and practice guidance to HBV-infected health-care providers and students, particularly those performing exposure-prone procedures such as certain types of surgery," Dr. Holmberg and colleagues write.
"Because percutaneous injuries sustained by health-care personnel during certain surgical, obstetrical, and dental procedures provide a potential route of HBV transmission to patients as well as providers, this report emphasizes prevention of operator injuries and blood exposures during exposure-prone surgical, obstetrical, and dental procedures," they add.
According to the authors, as with the previous guidelines, HBV infection should not disqualify individuals from practicing medicine.
Changes to the previous recommendations include the following:
  • Prenotification of patients about a healthcare provider's HBV status is no longer required.
  • HBV DNA serum levels rather than hepatitis B e antigen status should be used to monitor infectivity.
  • An HBV DNA level of 1000 IU/mL (5000 GE/mL) or its equivalent is an appropriate threshold for determining whether a healthcare provider performing an "exposure-prone" procedure requires expert panel oversight.
  • Monitoring should be conducted with an assay that can detect levels as low as 10 to 30 IU/mL.
  • For most providers and students with chronic hepatitis B who conform to current standards for infection control, HBV infection status alone does not require any curtailing of their practices or supervised learning experiences.
The guidelines address several issues, including precautions and preventive strategies that should be followed when HBV-infected healthcare providers and students treat patients. Such strategies include work practice and engineering controls. The guidelines also discuss ethical considerations.
All healthcare providers and students should receive the 3-injection hepatitis B vaccine series, in accordance with current CDC recommendations.
MMWR Morb Mortal Wkly Rep. 2012;61(RR03):1-12. Full text

Different Dose for Different Folks: Some Guidance


From Medscape Cardiology > Black on Cardiology

Henry R. Black, MD; Domenic A. Sica, MD
Posted: 07/05/2012
Dr. Black: You gave a very nice talk at our recent American Society of Hypertension meeting about the differences in pharmacodynamics and pharmacokinetics between men and women. It is a very intriguing idea that made me wonder whether we should have different doses for women than we do for men. Should we do it on a per kilogram basis? Should we do it on the basis of waist-to-hip ratio? What do you think about that?
Dr. Sica: It's a complex topic that you need to always break down to the fundamentals. When you look at a female, the general belief is that body stature is smaller, thus the volume of distribution for a drug would be less. The metabolic capacity in the liver is finite, based on body size, so you would have less capacity.
Many women have normal renal function, but a caveat in interpreting that phrase is that normal renal function still has a range. If a man had a glomerular filtration rate (GFR) of 120 mL/min/1.73 m2 and a woman had a GFR of 90 mL/min/1.73 m2, both would be in the normal population range. However, the female would have a value that is 25% less than the male's within the normal range. It is as if there is a lesser capacity in all the mechanisms by which a drug is cleared from the body, and a smaller space into which the drug, taken by mouth, distributes. As a result, you typically see higher blood concentrations.
The devil is in interpreting that, because for most of the drugs we use, a 15%-20% higher effective dose is unlikely to cause a significant change in response. If you give a patient a medicine each day for a desired pharmacologic effect to treat a disease, there is wide, natural inter- and intrapatient variability in how we respond to a medicine, and that variability dwarfs the inherent change you might see from the kinetic difference.
Dr. Black: In children, we do it per kilogram. We stop doing that in adolescence, when we start dosing them like adults. Are you saying that even though there are clear differences in the blood levels potentially achieved, what really matters is whether the therapeutic effect is there? It isn't worth the time and expense of measuring blood levels and adjusting doses in men vs women?
Dr. Sica: Yes. Therapeutic drug monitoring is of little help. Empirically, if you are a good clinician and your patient weighs 90 lb and another patient weighs 180 lb, the dose given is going to be less. The concentration-dependent side effects, which reflect the prevailing blood level, will be higher in the 90-lb person than the 180-lb person. That's natural.
When we talk about gender, unfortunately, we have to talk about body weight because there are women who weigh 90 lb and some who weigh 180 lb. Each would require different dosing considerations.
Here is a teaching point. If you pick a dose at the low end of the dose range and give it to someone who weighs 90 lb, they may achieve a therapeutic level at that low dose.
Dr. Black: It's not just the level, but the response, that you are looking for, right?
Dr. Sica: The response, which relates to the dose -- the blood levels that are seen. With a smaller body size, a smaller dose may get the desired effect. With a smaller body size, if you give a high-normal dose as you titrate up, you may reach toxic concentrations more readily. You almost have to position yourself to see where the drug apportions itself when it enters the body when there are smaller spaces for it to distribute into. I agree that maybe dosing should be curtailed or cut back for people with very low body weight.
The converse is of equal importance. If you have a 300-lb individual and you give him a conventional dose, you may not get the desired effect because you have not filled up all the compartments in the body necessary to get the drug action. It's a very complex algorithm. Male vs female, there is little difference. It's body size and the GFR issue that I spoke of that become important.
Here is another teaching point. If we treat someone for methicillin-resistant Staphylococcus aureus (MRSA) with vancomycin or if we use gentamicin for a gram-negative infection in a pregnant patient, we must remember that pregnancy is accompanied by a 30% increase in GFR. The patient hyperfilters throughout pregnancy, so if you give a drug that is mainly renally cleared, it is going to be filtered out in abundance, and you will reach subtherapeutic levels clinically very quickly. You have to be mindful of the high end of the normal GFR range when interpreting dosing considerations.
Dr. Black: It seems to me that in the end, it's whether you get the therapeutic effect that you are looking for, not so much where you go to do that. You're not going to look for a blood level as much as to cure the infection, reduce the blood pressure, or fix the lipids.
Dr. Sica: If you don't get the desired therapeutic effect, which happens in probably 1 in 4 patients, you have to step back and ask whether you are achieving adequate blood levels. The other 3 out of 4 patients are probably using enough of the drug, and lack of effect is related to patient compliance and adherence. But for 1 in 4 patients, it is an absorption issue (rate and extent of absorption), a body size issue, or a GFR issue. You have to be able to understand and break those down in nonresponders, because it is not always about them not taking the medicine; it's us not delivering the medicine correctly to them.
Dr. Black: We shouldn't blame the patient, as we always seem to do?
Dr. Sica: Sometimes, interestingly, patients blame themselves, because they are put off by the fact that we are accusatory with our body language, and we're not really doing that as clinicians. We don't send the right signals when we don't get the desired response as a doctor.
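An editorial illustration of the clearance point Dr. Sica raises above: for a regularly dosed drug, the average steady-state concentration is approximately (bioavailability × dose / dosing interval) / clearance, so a 30% rise in clearance lowers the steady-state level by roughly 23% if the dose is unchanged. The sketch below works through that arithmetic with hypothetical numbers for a renally cleared drug; it is an illustration, not dosing guidance.

# Illustrative only: how a change in clearance shifts average steady-state drug concentration,
# using the standard relationship C_ss,avg = (F * dose / tau) / CL. All numbers are hypothetical.

def css_avg(dose_mg, tau_h, bioavailability, clearance_l_per_h):
    """Average steady-state concentration (mg/L) for repeated dosing."""
    return (bioavailability * dose_mg / tau_h) / clearance_l_per_h

baseline_cl = 5.0                 # L/h, hypothetical clearance of a renally eliminated drug
pregnancy_cl = baseline_cl * 1.3  # ~30% higher clearance, mirroring the GFR rise in pregnancy

before = css_avg(dose_mg=500, tau_h=12, bioavailability=1.0, clearance_l_per_h=baseline_cl)
during = css_avg(dose_mg=500, tau_h=12, bioavailability=1.0, clearance_l_per_h=pregnancy_cl)
print(f"C_ss falls from {before:.1f} to {during:.1f} mg/L, about {1 - during / before:.0%} lower")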

Preoperative Skin Antiseptic Preparations for Preventing Surgical Site Infections: A Systematic Review


From Infection Control and Hospital Epidemiology

Abstract and Introduction
Chris Kamel, MSc; Lynda McGahan, MSc; Julie Polisena, MSc; Monika Mierzwinski-Urban, MLIS; John M. Embil, MD, FRCPC

Posted: 06/29/2012; Infect Control Hosp Epidemiol. 2012;33(6):608-617. © 2012 

Abstract

Objective. To evaluate the clinical effectiveness of preoperative skin antiseptic preparations and application techniques for the prevention of surgical site infections (SSIs).
Design. Systematic review of the literature using Medline, EMBASE, and other databases, for the period January 2001 to June 2011.
Methods. Comparative studies (including randomized and nonrandomized trials) of preoperative skin antisepsis preparations and application techniques were included. Two researchers reviewed each study and extracted data using standardized tables developed before the study. Studies were reviewed for their methodological quality and clinical findings.
Results. Twenty studies (n = 9,520 patients) were included in the review. The results indicated that presurgical antiseptic showering is effective for reducing skin flora and may reduce SSI rates. Given the heterogeneity of the studies and the results, conclusions about which antiseptic is more effective at reducing SSIs cannot be drawn.
Conclusions. The evidence suggests that preoperative antiseptic showers reduce bacterial colonization and may be effective at preventing SSIs. The antiseptic application method is inconsequential, and data are lacking to suggest which antiseptic solution is the most effective. Disinfectant products are often mixed with alcohol or water, which makes it difficult to form overall conclusions regarding an active ingredient. Large, well-conducted randomized controlled trials with consistent protocols comparing agents in the same bases are needed to provide unequivocal evidence on the effectiveness of one antiseptic preparation over another for the prevention of SSIs.

Introduction

Surgical site infections (SSIs) occur in approximately 2%–5% of patients who undergo clean extra-abdominal surgeries, such as thoracic and orthopedic surgery, and in up to 20% of patients who undergo intra-abdominal procedures. SSIs can lead to increased morbidity and mortality and are associated with prolonged hospital stay and greater hospital costs. The Institute for Healthcare Improvement reports that SSIs in the United States increase the length of hospital stay by an average of 7.5 days, at an estimated cost of $130 million to $845 million per year. In 2006, SSIs accounted for 14% of healthcare-associated infections in the United Kingdom, resulting in additional costs of between £814 and £6,626, depending on severity.
Because microbial contamination of the surgical site is a requirement for the development of an SSI, prevention techniques aim to minimize the presence and spread of microorganisms. Prevention strategies include antibiotic prophylaxis, antiseptic prophylaxis, hair removal, perioperative glucose control, and maintenance of normothermia. Topical antiseptics may be applied to the skin preoperatively to reduce SSI risk. The main types of antiseptics are iodine or iodophor (such as povidone-iodine [PI]), alcohol, and chlorhexidine gluconate (CHG). CHG and PI can be mixed with either alcohol or water, which may have implications for effectiveness.
The Centers for Disease Control and Prevention (CDC) guidelines recommend that patients shower or bathe with an antiseptic solution the night before surgery and that the skin be prepared with "an appropriate antiseptic agent." Clinical practice guidelines from the National Institute for Health and Clinical Excellence recommend that patients shower or bathe with soap the day before or the day of surgery and that iodophor-impregnated surgical drapes be used when incise drapes are required. They also recommend preparing the skin at the surgical site with antiseptic immediately before incision, but they do not indicate a preference for CHG or PI.
We conducted a systematic review of the available published data on the comparative clinical effectiveness and safety of preoperative skin antiseptic preparations for preventing SSIs. This review is an update of a comprehensive report by the Canadian Agency for Drugs and Technologies in Health.[7]
 

Cranberry Products May Prevent Urinary Tract Infections


From Medscape Medical News

Troy Brown


July 9, 2012 — Cranberry-containing products may protect against urinary tract infections (UTIs), according to a recent meta-analysis. However, because there was substantial heterogeneity among the studies, the results of the analysis should be viewed with caution, the researchers say.
Chih-Hung Wang, MD, from the Department of Emergency Medicine at National Taiwan University Hospital and National Taiwan University College of Medicine in Taipei and colleagues report their findings in the July 9 issue of the Archives of Internal Medicine.
One of the most common bacterial infections, UTIs affect up to 40% to 50% of women at least once in their lifetimes, the authors report. Pregnant women, the elderly, and patients with neuropathic bladder are also at increased risk for developing UTIs.
Cranberry (genus Vaccinium, including the species V oxycoccus, V macrocarpon, V microcarpum, and V erythrocarpum) is a folk remedy that has been used for years to relieve UTI symptoms. Cranberry was originally thought to work by acidifying the urine, but its effects are now known to be due to its interference with the attachment of bacteria to uroepithelial cells. In fact, A-type proanthocyanidins were identified in cranberry in 1989 as compounds with the potential to inhibit the adherence of P-fimbriated Escherichia coli to the urogenital mucosa.
Several new studies have been published since the last meta-analysis on this issue. Therefore, Dr. Wang and his team wanted to reevaluate the effectiveness of cranberry products for preventing UTIs and to study factors that influence their effectiveness.
Dr. Wang and colleagues analyzed 10 trials with a total of 1,494 participants (794 in the cranberry groups and 700 in the control groups) in the meta-analysis. The pooled analysis yielded a relative risk (RR) of 0.68 (95% confidence interval [CI], 0.47 - 1.00), and significant heterogeneity was found among trials (I² = 59%). One trial was excluded from the main analysis because it had the most significant effect on the pooled summary estimate.
Cranberry Products Appear Effective
Heterogeneity decreased after exclusion of this trial, and cranberry-containing products appeared to effectively prevent UTIs (RR, 0.62; 95% CI, 0.49 - 0.80; I² = 43%).
According to sensitivity analysis, the pooled summary estimate was stable to risks of bias in random sequence generation, study characteristics, and definitions of UTI, but the protective effect was much stronger in the 2 studies that did not use a placebo in the control group (pooled RR, 0.36; 95% CI, 0.21 - 0.62).
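For readers unfamiliar with how pooled relative risks and I² values like those above are produced, here is a minimal, generic sketch of inverse-variance random-effects pooling of log relative risks (the DerSimonian-Laird approach). The per-trial numbers are invented for illustration and are not the trials analyzed by Dr. Wang and colleagues.

import numpy as np

# (RR, lower 95% CI, upper 95% CI) for hypothetical trials -- not the trials in this meta-analysis
trials = [(0.55, 0.30, 0.95), (0.80, 0.55, 1.15), (0.45, 0.25, 0.85), (1.05, 0.70, 1.60)]

log_rr = np.array([np.log(rr) for rr, lo, hi in trials])
se = np.array([(np.log(hi) - np.log(lo)) / (2 * 1.96) for rr, lo, hi in trials])  # SE from CI width
w = 1.0 / se**2                               # fixed-effect (inverse-variance) weights

pooled_fixed = np.sum(w * log_rr) / np.sum(w)
q = np.sum(w * (log_rr - pooled_fixed) ** 2)  # Cochran's Q
dfree = len(trials) - 1
i_squared = max(0.0, (q - dfree) / q) * 100   # I^2: % of total variation due to heterogeneity

tau2 = max(0.0, (q - dfree) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-trial variance
w_re = 1.0 / (se**2 + tau2)                   # random-effects weights
pooled = np.sum(w_re * log_rr) / np.sum(w_re)
pooled_se = np.sqrt(1.0 / np.sum(w_re))

print(f"Pooled RR {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.2f}-{np.exp(pooled + 1.96 * pooled_se):.2f}), "
      f"I^2 = {i_squared:.0f}%")

The sensitivity and subgroup analyses described in this article amount to repeating this kind of pooling after dropping or regrouping trials and checking whether the summary estimate holds up.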
Certain Populations Seem to Benefit More
Subgroup analysis found that some populations seemed to experience greater protective effects, including women with recurrent UTIs (RR, 0.53; 95% CI, 0.33 - 0.83; I² = 0%), female populations (RR, 0.49; 95% CI, 0.34 - 0.73; I² = 34%), children (RR, 0.33; 95% CI, 0.16 - 0.69; I² = 0%), cranberry juice users (RR, 0.47; 95% CI, 0.30 - 0.72; I² = 2%), and people using cranberry-containing products more than twice daily (RR, 0.58; 95% CI, 0.40 - 0.84; I² = 18%). These subgroup findings, however, were not statistically significant in meta-regression.
Cranberry juice was more effective than cranberry capsules. The authors write that this could be due to increased hydration from drinking the juice or the effects of unknown substances in the juice. The authors acknowledge that drinking large quantities of juice could be problematic for some individuals, including patients with diabetes and those with gastrointestinal issues.
A dosing frequency of more than twice daily was more effective. "Because in vitro data have suggested that the antiadhesion activity of cranberry juice on fimbriated E coli lasts for approximately 8 hours after ingestion, dosing more frequently than twice daily may be a reasonable choice," write the authors.
Caution Needed
"[T]he results of the present meta-analysis support that consumption of cranberry-containing products may protect against UTIs in certain populations. However, because of the substantial heterogeneity across trials, this conclusion should be interpreted with great caution," the authors conclude.
The authors have disclosed no relevant financial relationships.

Meditation, Exercise May Decrease Cold Symptoms


Emma Hitt, PhD


July 9, 2012 — Training in mindfulness meditation and sustained moderate-intensity exercise appear to be associated with reduced illness severity and fewer days of missed work because of acute respiratory infections (ARIs), compared with doing nothing, according to the findings of a randomized trial.
Bruce Barrett, MD, PhD, from the Department of Family Medicine at the University of Wisconsin, Madison, and colleagues report their findings in the July/August issue of the Annals of Family Medicine.
According to the researchers, enhancing general physical and mental health might reduce ARI burden.
"Evidence suggests that mindfulness meditation can reduce experienced stress and negative emotions," Dr. Barrett and colleagues write. "Similarly, both epidemiological and experimental studies have suggested that regular exercise may protect people from ARI illness."
The researchers sought to evaluate the ability of meditation or exercise to reduce the incidence, duration, and severity of ARIs in adults 50 years and older.
A total of 154 participants were randomized to 1 of 3 study groups approximately equal in size; 149 completed the trial. One group received 8 weeks of training in mindfulness meditation, another group received 8 weeks of training in moderate-intensity sustained exercise, and the third group served as an observational control group.
The standardized 8-week mindful meditation course involved group sessions (2.5 hours weekly) and at-home practice (45 minutes daily). The exercise intervention was similar to the meditation course in terms of time and location, but participants focused on achieving moderate-intensity sustained exercise, with a target rating of 12 to 16 points on a 6- to 20-point scale.
In the meditation group, there were 27 ARIs, resulting in 257 days of illness. In the exercise group, there were 26 ARIs, resulting in 241 days of illness; and in the control group, there were 40 ARIs, resulting in 453 days of illness.
ARI severity, measured on the Wisconsin Upper Respiratory Symptom Survey (WURSS-24), was 144 in the meditation group, 248 in the exercise group, and 358 in the control group. Global severity was significantly lower in the meditation group than in the control group (P = .004). Duration of illness in the meditation group trended toward statistical significance compared with the control group (P = .034). Severity and duration of illness were lower in the exercise group than in the control group, although the differences were not statistically significant (P = .16 and P = .032, respectively). The researchers had designated a P value of .025 as the cutoff for the rejection of the null hypothesis.
A total of 67 days of work were missed because of ARIs in the control group, 32 in the exercise group (P = .041), and 16 in the meditation group (P < .001). Viruses were identified from nasal washes in 53.8% of samples from the meditation group, 42.1% from the exercise group, and 54.3% from the control group. Neutrophil counts were similar in the 3 groups, whereas slightly higher interleukin-8 levels were detected in the meditation group than in the control group (P = .022).
"This ground-breaking randomized trial of meditation and exercise vs wait-list control among adults aged 50 years and older found significant reductions in ARI illness," Dr. Barrett and colleagues conclude.
The researchers note that one of the limitations of this study is that "participants in such a trial cannot be blinded to behavioral training interventions, thus allowing for the possibility of self-report bias."
However, they add that if "these results are confirmed in future studies, there will be important implications for public and private health-related policy and practice, as well as for scientific research regarding mechanisms of health maintenance and disease prevention."
The study was supported by grants from the National Institutes of Health (NIH), including the National Center for Complementary and Alternative Medicine (NCCAM) and the Clinical and Translational Science Award (CTSA) Program of the National Center for Research Resources. 
Ann Fam Med. 2012;10:337-346. Abstract