Empathy as a way to resolve our problems.

A contentious atmosphere pervades our societies, with prevalent anger, frustration and fear, and it should concern everyone.  Older members of our society grew up amid great change, with social upheavals, environmental calamities, ecological safety thresholds being crossed, and the threat of nuclear war, but to all intents and purposes the social future was somewhat predictable.  Those now in their 20s, 30s and 40s are living in a world where the future in all areas is entirely uncertain and even potentially dire outcomes are being predicted.  The world we are bequeathing to our children, let alone our grandchildren, is one apparently devoid of empathy.

To think about what it might be like to live in a war zone, with the threat of bombings, gunfire, air raids, and the intrusion of foreign troops into one's home and business, would rattle many people's perceptions concerning the benign nature of military intervention.  Massive numbers of war refugees are becoming normal.  Add to that melee the problem of even larger numbers of ecological refugees and it is not surprising that thoughts of impending Armageddon are prevalent.  One symptom among more developed nations has been isolationism, a strategy for walling themselves off from ecological problems, which do not stop at national boundaries.  And if people of all nations began to empathize on a global level with the ecological system that is the planet-at-large, they might begin to wonder whether the caustic conditions of modern consumer-based society have in some way been caused by humans disconnected and displaced from the natural world.

A sad symptom of modern consumerism is that it prevents people who benefit from global capitalism from empathizing with those who bear the burden of global environmental and social problems.  They never think what it might be like to live in abject poverty while local resources are shipped elsewhere to propagate another nation's monetary wealth.  If more developed countries were genuine in their attempts to be empathic, the need for global-scale social change would be self-evident.  Currently, it takes doomsday-like catastrophes, such as hurricanes, tsunamis, major droughts and floods, famines, and nuclear power station disasters (e.g. Chernobyl and Fukushima), to highlight the plight of the most disadvantaged people.  Yet the speed at which these disasters are forgotten is an unfortunate indication of how little empathy actually exists.

Relearning the gift of empathy is an important first step in comprehending and resolving the tremendous problems that we as humans must come to grips with if we are to successfully adapt to the global chaos now manifesting. It is a step that means leaving the shelter of one's own experiences so as to enter the minds of others and to entertain different points of view.  Empathy is the pathway towards compassion and provides a compass towards deeper truths of human experience. Through empathy, every person can come to see the false allure of consumerism and the myth that we can be happy by ourselves, alone and separate from the minds and hearts of other people.  The human disconnect from, and treatment of, the natural world is a mirror of how we treat each other.

First, some definitions to get us all on the same page.

  • Apathy is a state of indifference – an absence of interest in or concern for certain aspects of emotional, social, or physical life. It is a common reaction to stress, where it manifests as learned helplessness – "I don't care."
  • Antipathy is a dislike for something or somebody, the opposite of sympathy. While often induced by previous experience, it can exist without a rational cause-and-effect explanation.
  • Sympathy is a social affinity in which one person stands with another person, closely understanding his or her feelings – more than simply the recognition of another’s suffering, sympathy is actually sharing another’s suffering, if only briefly.
  • Empathy is the capacity to recognize or understand another’s state of mind or emotion – the ability to ‘put oneself into another’s shoes’ – a skill greatly diminished in modern society.
  • Compassion is a profound human emotion that gives rise to an active desire to alleviate another’s suffering.

The emphasis on the constant acquisition of material goods in today's consumer lifestyle cheapens human relationships – it trains people to see others as mere stepping stones to some selfish sense of satisfaction.  Everyday interactions with other people – everyone from close friends to strangers on the street – are seen as valuable only to the extent that they can produce some benefit.  This has led to a condition I call 'hyper-individualism.' Many contemplative traditions speak of loving-kindness as the wish for happiness for others and of compassion as the wish to relieve others' suffering. It has been found that the most effective leaders are alike in one crucial way: they all have a high degree of what has come to be known as emotional intelligence.

Emotional intelligence is the ability to successfully manage ourselves and our relationships, and encompasses five basic skills:

  1. Self-awareness: the ability to recognize and understand your moods, emotions, and drives, as well as their effect on others.
  2. Self-regulation: the ability to control or redirect disruptive impulses and moods.
  3. Motivation: a passion to work for reasons that go beyond money or status.
  4. Social skill: proficiency in managing relationships and building networks; an ability to find common ground and build rapport.
  5. Empathy: the ability to understand the emotional makeup of other people, and skill in treating people according to their emotional reactions.

It's important to emphasize that building one's emotional intelligence cannot – will not – happen without sincere desire and concerted effort: one's own enthusiasm to create change and to communicate with and understand the feelings of all the people involved.  This is about true empathy, not about trying to manipulate emotions for some political goal.

Nothing great can be achieved without enthusiasm, so if one wants to become really empathic, then high emotional intelligence must be developed, which means building trust and understanding the other person's perspectives and worldview. Part of the 'empathy process' is establishing trust and rapport, which helps us to have sensible 'adult' discussions. Establishing trust is about listening and understanding without judging – not necessarily agreeing (which is different). A useful focus when listening to another person is to try to understand how they feel and to discover what they want to achieve.  It seems obvious, but of all the communication skills, listening is arguably the one that makes the biggest difference in finding common ground for relationship solutions, whether personal, civic, or political.

Listening does not come naturally to most people, so we need to work hard at it – to stop ourselves 'jumping in' and giving our opinions.  Mostly, people don't listen – they just take turns to speak – more interested in announcing their own views and experiences than in really listening to and understanding others.  We all like to be listened to and understood – when we are understood we feel affirmed and validated.  Yet, as I just said, we do not readily make others feel validated, as we often become fixated on pushing our own perspectives without considering that others also have theirs.  This is where empathy becomes a most valuable trait and tool in understanding and resolving differences equitably.

Active listening is responding or doing something that demonstrates you are listening and have understood what others are saying. Giving non-verbal cues to show you are paying attention (nodding, making eye contact, or making facial expressions appropriate to what is being said) shows that you care about what is being said.  Also, reflecting back the main points and summarizing what has been said helps build rapport and ensures that what is being shared is clearly understood.

Problems arise because we close off connections.  These closures are really a form of 'anti-community,' because we focus on material values, which work against personal connections and community.  Our sense of community has been so severely eroded that many people feel they live in 'bubble cultures.' This is further worsened by the way society pigeonholes and separates people, forcing false assumptions about others and heightening antipathy.

To the four horsemen of the apocalypse (Conquest, War, Famine and Plague) roaming loose in our modern world, I would add apathy and antipathy.  All this adds up to a huge cloud of fear that further reduces our willingness to connect. We are hard-wired to care about and connect with people, and when we do, it creates a reason for caring, cooperation, and a desire to be of service to others.  This creates inclusionary cooperation with EMPATHY and COMPASSION.  And why is this so critical now?  Because unless we start practicing empathy, we will fail.  It's easy to think you can be a survivalist and ignore people, but in the end we all sink or swim together.  And for me, I'd rather swim to new shores of hope and the promise of a better world.

Why we should ALL support Renewable Energy 7 – Quality of Life and Health part 4 – Health and Fossil Fuel Use

One unusual factor noticeable everywhere during the Covid-19 lockdown is the improvement in air quality, because of the sharp decrease in fossil fuels used for transportation.  While we may be concerned about a microscopic virus killing people, that same concern should extend to the quality of our air, in which the products of fossil fuel combustion are harmful to all of us, but especially to susceptible populations.  It is estimated that 230,000 people in the USA and 3.61 million people worldwide die each year from problems related to fossil fuel pollution.  Why are we not scared of those death numbers?

Power plant emissions represent the largest source of mercury in the air, which then settles onto the ground and runs off into water sources. Fossil fuel transportation is also responsible for releasing substantial amounts of nitrogen oxides and carbon monoxide. Regardless of how pollutants are emitted and from what source, there are many compounds associated with burning fossil fuels that can affect human health – many of these conditions can be avoided or abated by reducing exposure and shifting to cleaner methods of energy production (Int J Environ Res Public Health. 2018 Jan; 15(1): 16).

It is estimated that in the United States $100 billion is spent annually on health-related damages.  Burning fossil fuels involves much more than just the release of carbon dioxide; it also releases primary and secondary pollutants such as fine particulate matter, ozone, sulfates, formaldehyde and benzene.  The particulates are so small they are easily breathed in and lodge deep in the lungs.  Industrial processes such as incineration, smelting, and mining release sulfur dioxide, which can permanently damage the lungs, while lead is a known trigger of brain and nervous system damage.  There are 10 main health problems commonly associated with fossil fuel combustion.

Asthma – one of the most common ailments, affecting more than 300 million people (nearly 4% of the global population).  Asthma is an inflammation and narrowing of the airways in the lungs such that sufferers have great difficulty breathing, sometimes becoming so severe that airway obstruction leads to medical emergencies.  Pneumonia requiring hospitalization is also seen more frequently, especially in those already compromised by other medical conditions, with the young and the aged particularly at risk.

Bronchitis – Acute and chronic bronchitis can be caused by fossil fuel particulates. Exposure to nitrogen oxides, especially in young children, can trigger airway inflammation associated with coughing, fatigue, and fever. Nitrogen dioxide is a known lung irritant and industrial pollutant and is regulated by the US-EPA.

Upper Respiratory and Eye Irritation – Ground-level ozone is known to cause significant irritation of the eyes and of the lining of the nose and throat, among a range of respiratory problems. Hydrogen chloride, hydrogen fluoride, and other acidic gases created from fossil fuel emissions do so as well. A mixture of nitrogen oxides and sulfur dioxide with atmospheric elements can create acid rain, which harms trees, fish, and wildlife.

Heart Attack – Particulates from burning coal are five times more harmful to the heart than those from burning other fossil fuels.   Mercury, arsenic, selenium, and other toxins are carried on particles so small they can be absorbed into the bloodstream. The particles can accumulate on arterial linings and add to fatty deposits already there, triggering a heart attack.

Heart Disease – Pollutants from factories, power plants, and refineries form numerous compounds, such as ozone, that cause inflammation in the cardiovascular system, stimulating cardiac problems in people already susceptible to heart conditions.  Research shows that mercury emissions (common with coal burning) have been associated with thickened arteries and high blood pressure.

Neurological Problems – Mercury emissions from burning coal, and common with cement factories and boilers, cause developmental and behavioral problems.  Mercury accumulates particularly in lakes, streams, and oceans, where it gets into fish that humans consume. It has been connected with attention deficit hyperactivity disorder, lower IQ, and impaired memory and motor skills.  It is also believed that in the United States mercury exposure via the mother negatively affects over 300,000 fetuses each year through the placenta.

Cancer – Polycyclic aromatic hydrocarbons (mutagens and carcinogens) cause elevated incidences of cancer, beginning with fetal exposure through the placenta. Inhalation of toxic organic compounds and chemicals by anyone of any age can increase the risk of lung cancer. Benzene, formaldehyde, cadmium, arsenic, manganese, and lead are known carcinogens, and dioxins are strongly associated with lymphomas, soft tissue sarcomas, and stomach carcinomas.

Organ Damage – Brain, liver, and kidney damage are known to occur with mercury exposure. Even if people are not directly exposed to it, this toxic metal is often present in foods.

Immune System Problems – Many fossil fuel emissions, such as aromatic compounds, dioxins, heavy metals, lead, and hydrocarbons, cause immune system problems.  This is particularly problematic in young children, who have immature immune systems, and in immuno-compromised people. Bacterial or viral pathogens are more likely to harm such people, who have less capacity to mount a natural immune response to infection.

Why we should ALL support Renewable Energy 6 – Quality of Life and Health part 3 – Reasonable Regulations

If you agree with a regulation, it is reasonable, but if you disagree, it is unreasonable.  And reasonable versus unreasonable all hinges on perception and on acceptance or non-acceptance of a risk or hazard (see previous post).  If I told you about a drug on the market that helps prevent heart attacks and strokes but also kills more than 3,000 people every year, would you be supportive of it?  As long as you are not one of the people who die from it, you might be, but you don't know until you take it.  So, is 3,000 out of 330 million acceptable in a drug policy? At some point we need to draw a line that separates harm from safety.  Where that line is drawn is extremely complex.  The rationale for what is acceptable can vary wildly from person to person, usually depending on where a person's beliefs stand on any given issue.  And while no harm at all is certainly preferable, it is not realistic.  So, how do we decide where to draw that line when considering risk evaluation for the health and safety of the whole community?

At this time the whole world is gripped with concern (and fear) about the Coronavirus.  It seems to be a highly transmissible virus and extreme measures have been enacted to combat its spread.  Most populations in all countries seem ready to comply with social distancing regulations and lockdowns because they see them as necessary to safeguard everyone from this virulent disease.  This virus is an acute problem because its effects can be seen immediately.  Strict regulations to combat chronic problems, however, are less acceptable because the cause is not as readily seen – the effects appear only over a long period of time. Air pollution is one such tricky problem because it is chronic in its effects. That is, it takes years to recognize when health is being adversely affected and almost as long to conclude that it is the air pollution causing the health problems.  Part of the problem with air pollution is that we cannot easily see it or its effects.  To many people who focus primarily on money and profits from industries causing pollution, regulations are seen as onerous intrusions into business practices that create profits for stockholders.  These profits come at the expense of everyone else exposed to the pollution.  We can agree on where to draw the line for acute risk problems since, in general, they tend to be relatively short term, but we disagree on where that line ought to be for long-term chronic risk problems.

The USA has a regulatory policy of 'innocent until proven guilty' when it comes to risk protection.  That is, something that might be risky is allowed until it is shown to be an undoubted hazard.  Most of the rest of the developed world follows the 'Precautionary Principle,' where a potential risk has to be shown to be relatively benign (or harmless) before it is released.  Ultimately, regulations get enacted to safeguard people from any specific risk problem.  'Innocent until proven guilty': industry can introduce any products it wants, and government bears the burden of proof to show whether those products are dangerous.  Precautionary principle: industry cannot introduce a product until it has been very thoroughly tested and shown convincingly to be harmless, or the least harmful of all the options.

Most U.S. environmental policy was enacted during the 1970s, following the National Environmental Policy Act (NEPA) and the formation of the EPA.  Prior to the 1970s, the negative effects of pollution were widespread and visible.  The 1962 landmark publication of Silent Spring by Rachel Carson revealed the casualness with which the industrial powers considered pollution.  It was seen as merely a byproduct of creating a higher Standard of Living (SOL).  What was not talked about in higher circles was the drastic drop in Quality of Life (QOL) from pollution's effects.  Even today the discussion is about SOL, and QOL is erroneously equated with it, even though QOL has been decreasing for many decades because of multiple levels of pollution.

One of the biggest problems with laws and regulations is their interpretation and enforcement.  For instance, NEPA was a short four-page overarching law enacted by Congress.  The interpretation of that law to create the outline for the rules and regulations was 52 pages long, and the rules and regulations that form the basis for how the EPA functions run to several volumes.  Then put all these regulations into the hands of bureaucrats and you get the inevitable red tape that makes people want to scream.  Enforcing them takes more technocrats who know what to do, and the process gets so complicated it's little wonder so many people feel frustrated with laws that are meant to protect us.  It takes a lot of funding to make the whole process work as smoothly as possible.  Two techniques promoted by industrial lobbyists to convince politicians to de-regulate are simply ignoring the regulations at various levels of enforcement, or defunding the enforcement.  The first leaves the agencies open to never-ending lawsuits, and the second just means nothing gets done, since there are too few regulators to actually do the regulating.

I started this post by asking whether regulations are reasonable or unreasonable.  We get so bogged down in arguing about the problems of a regulation, or how it is enforced, that we forget why the regulation was enacted in the first place.  These laws are set up to protect us from harm and to minimize risk.  Instead of arguing against the law, maybe we should be debating how we can help it be enforced more effectively to protect people while still letting business function efficiently.   After all, isn't QOL everyone's thing?

Why we should ALL support Renewable Energy 5 – Quality of Life and Health part 2 – Risk Analysis

If we invested in, and used, non-polluting technologies and fuels, then we would not need rules, regulations or risk analyses to keep us healthy and safe.  We would be living in a world in which the only risks we faced would be those that come through accidents or where we consciously indulged in risky activities like sports.  The fact that we currently have so many rules, regulations and risk processes to minimize problems shows clearly how far we have gone in reluctantly accepting these serious problems.  And the crazy fact is that we do not need to accept these risks imposed upon us, if only we would make choices for a better quality of life instead of just a higher standard of living.

Acceptance comes from our personal perception of our individual reality based on our experiences – perception is not something abstract, it is as real as anything in life.  But personal experiences are misleading.  When we have not personally experienced a bad outcome, we feel it is rarer and less likely to occur than it actually might be.  We have an exaggerated view of our own ability to control our fate – some feel they can avoid hazards because they are wiser or luckier than others.  We tend to underestimate the risks of technologies we trust or are employed within, while we overestimate the dangers of technologies we distrust or dislike.

As an example of how we misunderstand real risk versus perceived risk, consider how risky you perceive the following hazards, and compare your perception to the actuarial number with each, which indicates the number of days of life expectancy lost: Smoking 20 cigarettes per day (2,370); Heart disease from lifestyle choices (1,607); Cancer (1,247); Overweight by 15% (777); Automobile accident (207); Homicide (93); Home accident (74); Drowning (24); Fire (20); and Airplane accident (3.7).  If just losing a few days seems OK to you, then let's reframe the hazards differently: You have a 1 in 5 chance of death from heart disease, a 1 in 7 chance of death from cancer, and a 1 in 24 chance of stroke, all related to lifestyle choices.  Yet death from a firearm is 1 in 314, from drowning 1 in 1,006, from an airplane accident 1 in 5,051, and from lightning 1 in 79,746.  Accidents tend to be rare events, but we fear them more than we apparently do the toxins found in the air, water and food we ingest all day, every day, which are more hazardous than we seem to realize.
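The actuarial figures above can be made concrete with a short sketch. It uses only the numbers quoted in this paragraph; the ranking loop and odds-to-percentage conversion are my own framing, not from any particular actuarial source.

```python
# Days of life expectancy lost per hazard, as quoted in the text above.
days_lost = {
    "Smoking 20 cigarettes/day": 2370,
    "Heart disease (lifestyle)": 1607,
    "Cancer": 1247,
    "Overweight by 15%": 777,
    "Automobile accident": 207,
    "Homicide": 93,
    "Home accident": 74,
    "Drowning": 24,
    "Fire": 20,
    "Airplane accident": 3.7,
}

# Rank hazards from most to least severe by actuarial impact.
for hazard, days in sorted(days_lost.items(), key=lambda kv: -kv[1]):
    print(f"{hazard:28s} {days:>8.1f} days")

def lifetime_risk_pct(one_in):
    """Convert a '1 in N' lifetime odds figure to a percentage."""
    return 100.0 / one_in

# The 1-in-314 firearm figure from the text, as a percentage.
print(f"Firearm death: {lifetime_risk_pct(314):.2f}% lifetime risk")
```

Seeing the lifestyle hazards sorted above the accidents makes the mismatch between fear and actuarial impact hard to miss.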

Risk assessment is a technique that helps us identify risks, determine the statistical probability or likelihood of their occurrence, and then assess the potential severity of the effects should a risk occur.  This allows us to create policy for the potential economic, health, social, and environmental costs of any hazard or risk situation.  The problem with risk analysis and policy is that it also needs to include the perception of risk, known as 'outrage.'  Sometimes people are outraged over something that is not a real problem, but they demand action anyway.  And other times people accept something as non-risky when in fact it is extremely dangerous.  How a risk is portrayed in the mass media or through social media can determine how that risk (real or imagined) is perceived!    So, Risk = Probability × Outrage, where Probability is a quantitative description of the likelihood that some harmful outcome will result from a given action, event, or substance, and Outrage is the public reaction, which may need to be increased or decreased.

In an ideal situation, products or situations with risks and hazards are tested to determine scientific results that can be quantified and probabilities determined – this is risk assessment.  The political, social, economic, and ethical aspects are then considered, which, when combined with the risk assessment, allow a risk management plan to be drafted.  But this risk management is further influenced by factors such as actually identifying and accepting a hazard, its toxicity, and the extent of exposure to it.  Add to that the perceptions of private citizens, industry and manufacturing lobbying, and non-profit interest groups, and you can see how hard it is to get reasonable public policy enacted for risks and hazards.

A low-probability but high-consequence hazard is also hard to rationalize.   For instance, the probability of a nuclear core meltdown is about 1 in 3,704 per reactor-year (roughly once every decade across the world's reactor fleet).  That is actually a moderately low risk, but the fallout (pun intended) can be long-lasting and felt globally.  We are still seeing the global effects of the Fukushima disaster of 2011, and locally the region in Japan may be unlivable for several decades or more.  To put it at a more Hollywood-movie level: the odds of Earth being hit by a cosmic body capable of causing global catastrophe are 1 in 1,600,000.  The likelihood is very low, but should it happen, massive ecological die-offs and the end of human civilization are certain.  New data suggest that the start and end of the Younger Dryas period (12,800 to 11,500 years before present) were characterized by two separate large cosmic impacts – maybe not as rare as the impact that killed off the dinosaurs 65 million years ago, which most people know about.  So, despite this Hollywood-esque example, we do need to ask whether a risk is acceptable regardless of its probability.
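The "once every decade" gloss on the 1-in-3,704 figure can be sanity-checked with a rough expected-frequency calculation. Reading the figure as a per-reactor-year probability, and assuming a fleet of roughly 400 operating power reactors worldwide (my assumption, not from the text):

```python
# Rough expected-frequency check for the core-melt figure quoted above.
p_meltdown_per_reactor_year = 1 / 3704
reactors = 400  # assumed worldwide fleet size, not from the text

expected_meltdowns_per_year = reactors * p_meltdown_per_reactor_year
years_between_events = 1 / expected_meltdowns_per_year

print(f"Expected core melts per year: {expected_meltdowns_per_year:.3f}")
print(f"Roughly one event every {years_between_events:.1f} years")
```

With those assumptions the arithmetic gives one expected event every nine to ten years, which is consistent with the once-a-decade gloss.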

Air pollution from the extraction and burning of fossil fuels is one of the largest risks we face today and ranks among the lifestyle choices (see the third paragraph above) that we make and accept every day.  Numerous studies clearly show that pollution from fossil-fuel combustion is the leading environmental threat to global health, yet we accept it as a perceived consequence of our standard of living.  Some say we have no choice in accepting fossil fuels, but the reality is that we do make that choice, either directly or through complacency toward economic forces that we accept.  We tend to feel powerless to make change for the better simply because many of the problems are literally global in nature.  So, what can we do?  Firstly, recognize that what makes the headlines is not usually the stuff that will likely hurt us – find out what does.   Ask not "is it safe," but rather "how risky is it compared to other options?"  Know that YOU DO HAVE CONTROL.  Ask yourself, "Do I have control over the risk?" If so, minimize it, and quit worrying about what can't be controlled or about the minor risks of everyday living.  Your continual choices concerning lifestyle and technology depend on what you accept as the true risks you face.  Many risks can be minimized through reasonable regulations or through the financial pressure of your purse.  More on that in the next post.

Why we should ALL support Renewable Energy 4 – Quality of Life and Health part 1 – Problems of Toxicology

Bring up the health concerns of extracting and burning fossil fuels and most people fall into one of four camps: those who fear and experience health problems from pollution; those who deny or ignore health problems because they profit from fossil fuels; those who fear losing the conveniences of using fossil fuels; and those who simply do not know that there are health problems associated with fossil fuels.  Not to make light of the fourth camp, but it is by far the largest, and its members seem to think the technologies that support our modern technological, consumer society are benign.

A quick overview of how technology has inundated us all with pollution problems should help remedy this illusion of benignity from technologies we take for granted.  There are over 100,000 synthetic chemicals on the market today, and very few have been thoroughly tested for harmful effects.  Here is a quick test of your attitude to chemical exposure.  Whenever you buy any household product, think, "Is it safe?"  If, after reading the list of ingredients, you would happily put it next to the food on the kitchen counter, then fine.  If you would be hesitant to place it anywhere near your food or even the children, then why is it in your house being used?  Some products with toxic ingredients and potential for harm do need to be used, but how carefully are you using them?

Of course, there are many kinds of toxicity, and for many we have no choice about whether we are exposed, because they come to us from many sources.   We are all exposed to pollution and toxic chemicals via industrial manufacturing through consumer products; workplace products; medicines and medical materials; pesticides and fertilizers; and air, water and solid waste.  Think about all the places and ways you can be exposed to these chemicals: drinking water, the air you breathe, the food you eat, household and cosmetic products, medicinal chemicals, and workplace exposure, as well as multiple possibilities in all public areas where chemicals are used.  The range of toxicants is also unnerving:  carcinogens that cause cancer, mutagens that cause mutations in DNA, teratogens that cause birth defects in developing fetuses, allergens that cause unnecessary immune responses, neurotoxins that damage the nervous system, and endocrine disruptors that interfere with hormones.  We might believe that government agencies like the FDA and EPA are protecting us – aren't they?  They are charged with monitoring 75,000 industrial chemicals, but there is minimal funding to do the required testing – too many chemicals, and too little time, people, and resources.  Only 10% of chemicals on the market are thoroughly tested, with less than 1% actually being government regulated.  Only 2% are screened as carcinogens, mutagens, or teratogens, and 0% are tested for endocrine, nervous, or immune effects. Because of the Clean Air and Clean Water Acts our air and water are somewhat regulated, but the thresholds of how much pollution is allowed are determined in part by how effective the polluting industries have been in lobbying to weaken such landmark acts.

Toxic emissions and drift inundate anyone downwind of the sources of pollution, whether farmers using pesticides on a field, smokestacks from manufacturing or coal-burning power plants, or emissions from resource extraction fields.  There is nowhere on Earth you can go now (including the polar regions) to escape these drifts, but the concentrations of toxics are clearly higher the closer you are to a source or if you are in an air flow corridor.  Water and wind have this propensity to move around the planet, so pollution that happens in China, for instance, can eventually make it into the rain and air over your home.   Besides river and groundwater pollution that makes it into your drinking water, there are numerous routes by which farm crops become polluted, as well as the many chemicals added by the food industry to processed food!  But wait – isn't the food industry regulated?  Sort of.  The FDA cannot test everything, so most of the time it leaves the industry itself to police the danger of any chemicals it uses.  If a chemical is 'Generally Recognized As Safe' (GRAS), then it need not be tested.  Who determines GRAS?  Experts, of course, most of whom work for or are funded by the industry itself.  I wish I were simply some radical trying to scare you, but you can easily look up this information for yourself.

It's not as if chemicals just vanish after they enter the environment (or us).  There is the problem of persistence as well as drift.  Some pollutants are longer-lasting than others and can persist within the environment for many years before breaking down naturally – sometimes into a chemical more toxic than the original pollutant.  Pesticide and toxicant pollution drift can be found from the tropics to the Arctic, and these chemicals accumulate within food webs.  What starts out as a low, innocuous concentration of a pollutant can, through the process of biomagnification (the concentration of toxins in an organism as a result of its ingesting plants or animals in which the toxins are more widely dispersed), reach toxic concentrations in long-lived predators.
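
A toy model makes the arithmetic of biomagnification clear.  The starting concentration and the tenfold magnification per trophic level below are illustrative assumptions, not measured values:

```python
def biomagnify(base_ppm, factor_per_level, levels):
    """Toxin concentration (ppm) at each trophic level of a food chain."""
    concentrations = [base_ppm]
    for _ in range(levels - 1):
        concentrations.append(concentrations[-1] * factor_per_level)
    return concentrations

# water plankton -> small fish -> large fish -> fish-eating bird
chain = biomagnify(0.001, 10, 4)
print(chain[-1])  # the top predator carries roughly 1 ppm
```

Even though the water itself holds a seemingly harmless 0.001 ppm, the long-lived predator at the top ends up carrying a concentration a thousand times higher.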

Dose-response curves allow us to predict the effects of higher doses.  By extrapolating the curve out to higher values, we can predict how toxic a substance may be to humans at various concentrations.  In most curves, response increases with dose, but this is not always the case; the increase may not be linear.  With endocrine disruption, for instance, toxic effects may increase even though the toxin concentration has decreased.  Now you might be asking: aren't some people more sensitive than others to pollution?  Yes, that is true, but you don't know who until they are affected!  Not all people are equal.  Sensitivity to a toxicant can vary with sex, age, weight, etc.  Babies, older people, and those in poor health are more sensitive.  The type of exposure is also important.  Acute exposure is a high dose over a short period of time and can often be pinpointed to a specific source (e.g. a chemical splash or factory explosion).  Hardest to pinpoint are chronic exposures, which occur at lower amounts over a long period of time.  To complicate the problem even more, many chemical substances may interact when combined within the environment, such that mixes of toxicants cause health effects greater than the sum of their individual effects.  These are called synergistic effects, and they pose a challenging problem for toxicologists since there is no way to test all possible combinations!  (And the environment contains complex mixtures of many toxicants.)
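
As a sketch of the extrapolation idea, here is a least-squares line fitted to an invented, perfectly linear data set.  (Real dose-response relationships are often not linear, as noted above, so extrapolation must be done with care.)

```python
def fit_line(doses, responses):
    """Least-squares fit of response = slope * dose + intercept."""
    n = len(doses)
    mean_d = sum(doses) / n
    mean_r = sum(responses) / n
    slope = (sum((d - mean_d) * (r - mean_r) for d, r in zip(doses, responses))
             / sum((d - mean_d) ** 2 for d in doses))
    intercept = mean_r - slope * mean_d
    return slope, intercept

# Hypothetical data: % of a test population responding at each dose (mg/kg)
doses = [1, 2, 4, 8]
responses = [5, 10, 20, 40]
slope, intercept = fit_line(doses, responses)

# Extrapolate to a dose never actually tested:
predicted_at_16 = slope * 16 + intercept  # predicts an 80% response
```

This is exactly the kind of step toxicologists must take when human data at high doses does not exist, and it is also where non-linear effects (like endocrine disruption) can make a straight-line prediction dangerously wrong.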

This blog post isn't meant to scare you, but to make you aware of the need for caution and awareness – don't simply accept things as they are just because some authority says "not to worry."  There is a lot we can do, but it means making your voice heard and joining it with others to set sensible, well-thought-out regulations and restrictions for everyone's health and benefit.   To Be Continued…

Why we should ALL support Renewable Energy 3 – Electrical Generation and Transmission, and RTOs

Electrical energy has probably done more than any other single technology to improve our Standard of Living.  Before I delve into how this has affected our Quality of Life, an overview of how the system works is in order.

"Our electricity grid is where our food distribution system was before refrigeration" – Elliott Negin.  Quite a profound statement, since electrical energy is so central to our lives.  Despite the increase in wind and solar renewable energy systems, the electrical grid in this country is still based on a system that started to be put in place during the 1870s.  It is predicated on the use of centralized power plants that transmit electricity through lines across the country.  One problem with transmitting electrical power is that energy is lost during transmission.  Local lines lose around 4%, while high-voltage transmission over distances of 300 miles loses only about 2%.  Electrical power is transmitted at between 155,000 and 765,000 volts depending on the distance to be covered.  Voltages are stepped up for transmission and stepped down when the power reaches the area where it is to be used (think multiple sub-stations).  The grid itself is fickle and complex, especially when one considers the difficulty of transmission coupled with trying to estimate supply and demand.  Small-scale local power grids began as early as 1882, with the national grid established in 1938.
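
Those loss figures compound.  Here is a minimal sketch using the ~2% long-haul and ~4% local-line losses quoted above (the 1,000 MW plant size is an arbitrary example, not a real facility):

```python
def delivered_power(generated_mw, long_haul_loss=0.02, local_loss=0.04):
    """Power reaching customers after long-haul and local line losses."""
    after_long_haul = generated_mw * (1 - long_haul_loss)
    return after_long_haul * (1 - local_loss)

# A hypothetical 1,000 MW plant feeding a distant city:
print(round(delivered_power(1000), 1))  # 940.8 MW actually delivered
```

Roughly 6% of the generation never reaches a customer – one reason locally generated power, which skips the long-haul stage, is attractive.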

The grid is not a system where electricity is simply dumped in as it is produced, for everyone to tap into as needed; the electricity has to be directed to a location.  The complexity of the national grid has increased as locally distributed power from rooftop solar panels and local wind generators has connected to it.  Local distributed systems are more variable in reliability than larger MW-scale systems.  Yet with increasing electrical demand, the use of local systems is essential, and if they were managed through modern computerized technology using real-time data, they would allow a more resilient and flexible grid.  RTOs (Regional Transmission Organizations), introduced in a previous post (Why we should ALL support Renewable Energy 1), are the practical means by which power is directed within the grid.

Most energy utilities work toward a triple goal – reliability of energy delivery, financial viability over the long term, and social and environmental responsibility.  Moving beyond the current grid system will not be simple or cheap.  Until the whole system is extensively upgraded and modernized, the potential for power failures within the grid is a real problem.  There are two terms used to describe disruptive power interruptions – brownouts and blackouts.  Most computer systems need to be plugged into voltage-smoothing devices (to ensure consistent voltage) that protect from voltage drops and spikes, the latter having the potential to destroy computerized systems.

A brownout is an intentional, or unintentional, drop in voltage within the electrical power delivery system.  If controllers of the system see a potential load emergency, they will intentionally reduce power output (under-voltage) to prevent the power outage known as a blackout.  When voltage is quickly restored, the resulting voltage spikes (over-voltage) can be quite damaging to unprotected computer components and data systems.

A blackout, however, is a complete interruption of power within a given service area.  Blackouts can occur without warning and last for extended periods, and are usually caused by catastrophic equipment failure or severe weather.  Rolling blackouts – rare in the USA but more common in many less developed countries – are controlled and usually preplanned interruptions of service.  Power companies may deliberately cause rolling blackouts for numerous reasons, but most often it is because peak demand cannot be met by existing supply within a region.  Rather than black out a whole region for a long period, the blackout is shared in short periods throughout the service area.  To emphasize the fragility of the current aging electrical grid, the great 2003 blackout throughout Northeast North America (southeastern Canada and eight northeastern states) lasted for up to two days in some areas and affected more than 50 million people.  Apparently, tree branches in Ohio interacting with high-voltage power lines tripped emergency fuses that caused unexpected voltage spikes, which then created a cascade of system failures across the region.

RTOs (Regional Transmission Organizations)

One of the ways that local utility companies manage power supply stability is through RTOs (operators that coordinate, control, and monitor multi-state electric grid systems).  These can involve a variety of direct and indirect brokerage systems that sell and trade electricity through complex wholesale RTO markets.  This complexity may ensure a stable electrical supply, but it also causes high fluctuations in market pricing that can be restrictive for smaller and rural community utilities that control minimal electrical generation of their own.

Remember that the triple goal (given above) of most utility companies is reliability of energy delivery, financial viability over the long term, and social and environmental responsibility.  The big problem with ensuring a stable supply through RTOs is that most local utilities have to make environmental responsibility subservient to supply stability and price viability.  This inevitably means that air quality suffers.  An electrical grid does not differentiate where its electrons are derived – fossil fuel or renewable source.  You may live in an area that generates completely renewable electricity, but if it is part of an RTO, your utility may be competing for electrons that, when peak supply is needed, come from a fossil fuel source.  Fossil-fuel-derived electricity comes with a high cost in air quality, especially in areas where it is generated or areas downwind of those generation systems.  The social and environmental costs are not shared.  Coupled with other fossil fuel problems from oil and gas extraction and the burning of gasoline and diesel, air quality can become quite harmful and even deadly for some.  The costs of using fossil fuels, and the reduction in quality of life that comes with trying to maintain a standard of living, are the focus of the next post.

Why we should ALL support Renewable Energy 2 – SOL, QOL, and GDP

As I discuss why we need to rethink not just renewable energy but how we live, and consider moving into a sustainable world, there are some basic measures of the country's success that we truly need to understand.

In the last post I said that we all confuse Quality of Life (QOL) and Standard of Living (SOL) when we think about the things we need in our lives.  SOL is the degree of monetary wealth that makes material comforts available to a person or community.  It is measured by Gross Domestic Product (GDP) per capita – in other words, how much money is moving through a country's economy in a given year relative to the population.  It does not take into account the negative or positive aspects of how the money is used.  GDP was conceived in the mid-1600s, but in 1934, FDR wanted some measure of whether the USA was doing better economically as it struggled out of the Great Depression.  GDP was adopted simply because it could be measured.  BUT it was only a minor metric within a whole slew of other minor and ineffectual economic measures that really didn't work in the big scheme of everyday life.  It simply measures the movement of money – it has no good or bad component to it.  A new community center hires a lot of people, and money moves.  Likewise, a terrible disaster creates a lot of aid and rebuilding, and money moves.  A million people dying in a terrible disaster also increases GDP – even more so if they have to be nicely buried.  War is particularly good at increasing GDP.  Yet, since the infamous Bretton Woods conference of 1944, we have used it as the international measure of choice for comparing lifestyles and SOL.  How do you compare SOL when one country is dirt poor and another has people with lots of 'stuff' who are mired in debt in which most will remain for the rest of their lives?
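
To make the point concrete, here is a minimal Python sketch (with invented round numbers) showing that GDP per capita is blind to what the money was spent on:

```python
def gdp_per_capita(spending_by_category, population):
    """GDP per capita is total money flow divided by population --
    it ignores WHAT the money was spent on."""
    return sum(spending_by_category.values()) / population

# Two hypothetical economies moving the same amount of money:
disaster_economy = {"disaster rebuilding": 9e12, "consumption": 12e12}
healthy_economy = {"community projects": 9e12, "consumption": 12e12}

population = 330e6
print(gdp_per_capita(disaster_economy, population) ==
      gdp_per_capita(healthy_economy, population))  # True
```

Both economies score identically, even though one is rebuilding from catastrophe and the other is investing in its communities.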

QOL on the other hand is about the general well-being of individuals and societies, taking into account the negative and positive aspects of life.   It is less about the economy and more about life satisfaction, including everything from happiness, physical health, family, education, employment, wealth, safety, security to freedom, religious beliefs, and the quality of the environment.  A good SOL is good, but a better QOL is preferable!  If a QOL includes a good SOL, so much the better, as long as the SOL has all the attributes that create a good QOL. Having sat in many ridiculous traffic jams and read about the debilitating debt so many people live with, I see that the QOL in the U.S. is not as high as everyone thinks it is. If the amount of money were truly equal to QOL then people with more money should have the highest QOL.  Alas, this is not true.  People with more money feel more secure from financial threats, but other than that, they do not score any higher on any measure of QOL – indeed, in many cases they score lower because their whole world is tied up with financial worries and loss of community support.     

So, can we measure QOL?  Yes, we can, but it is a lot more finicky than simply measuring money moving.  Yet QOL gives a much clearer picture of how well a country and its people are doing.  In 2019, the U.S. economy is supposedly doing well – but that conclusion is based on a rising GDP, while the majority of people in the country are struggling more than ever.  Money is moving, but it is siphoning up from the local economy we all live within to an elite economy.  Consider what happens when a Big Box store moves into town (especially a smaller rural town) with localized incentives and promises of jobs.  Economies of scale kick in as low-priced goods, that people didn't know they wanted, flood the local economy.  The Box store forces local businesses that cannot compete with the low prices or the lower-paying jobs out of business.  Then, with little competition, the big box store raises its prices and often moves to just outside the community's tax jurisdiction to avoid paying back to the local community.  The money made by the store is whisked back to a head office elsewhere that shelters its profits in overseas banks, so the country gets no tax base either.  The rich investors/owners then engage in the 'elite economy siphon,' pumping money from the local economy up to the elite economy, where it gets invested in stocks, shares, and other lucrative investments that do nothing for local economies.

In 1981, the Reagan administration promoted the concept of 'Trickle-Down' to argue that allowing this elite siphon economy would benefit everyone as the ultra-wealthy invested their profits back into the U.S. economy.  The truth is that this never happened.  During the Gilded Age of the Robber Barons (late 1800s), those Captains of Industry actually did invest some of their massive profits back into U.S. business infrastructure.  The problem was that they ruthlessly ran monopolies that drove out smaller competition and formed 'Trusts' – collections of the largest Robber Barons within a similar industry that maintained such monopolies.  They were even more ruthless in how they treated employees.  One only has to read about the typical Robber Baron behavior of John D. Rockefeller at his coal mine in Ludlow, Colorado, to understand the brutality and working conditions endured by the masses of people at that time.  Obviously, in the last century many laws have been passed to prevent monopolies from reforming and to improve working conditions.  Unfortunately, the elite economy has not been regulated as well, and the great gains that created the post-WWII economic boom, allowing a large middle class, have been eroding swiftly since 1981.  Yes, GDP has risen steadily since 1934, but while the country's QOL rose as well for a while, it maxed out in 1957, has been stagnant since, and has even decreased in the last 30 years.

One country in the world decided to opt out of the simple international convention of using GDP/GNP and instead instituted the metrics of Gross National Happiness (GNH), which focus on QOL as the measure of the country's success.  Critics of Bhutan's GNH cite the country's political problems as evidence that GNH doesn't work – as if our current economic measures (i.e. GDP) do work and somehow are evidence that we have harmonious political situations??  It's like comparing apples to tomatoes and complaining that tomatoes do not grow in orchards.  The critics are thinking from within the current system and cannot break out of the box of their rigid economic paradigms.  Are the Bhutanese rolling around in money because of GNH?  No!  But are they as depressed as many in the developed world?  Many sociologists and anthropologists point to the community aspects of these cultures as evidence that they are much more connected, and hence 'happier,' than we in the western world seem to be, because of the cohesive kinds of communities in which they live.  One common critique of the developed nations is something often referred to as 'the disease of isolation' – the individualistic way we live, isolated from each other even within neighborhoods.  What most critics of Bhutan's GNH so often miss is what the rest of the GNH metrics show.  We have no such metrics in our economy to compare, and so we don't compare, merely using GDP, which equates to just one of Bhutan's nine metrics.  (The nine GNH metrics are: Psychological well-being; Time use; Community vitality; Cultural vitality, diversity and resilience; Human health; Education; Ecological diversity and resilience; Living standard (the economic measure); and last, Good governance.)

To Be Continued……….

Why we should ALL support Renewable Energy 1 – QOL versus SOL, and new technology

The next series of posts is about broader aspects of why we all should support renewable energy and fuels, and a different way of thinking that is as revolutionary and cost effective as electricity and the automobile were to the lost world of the animal-drawn wagon and whale-oil lighting.

Do you believe that climate change is human caused, or not?  It doesn't really matter.  We all should be supporting renewable fuels for logical reasons that are beyond the geo-economic-political-scientific arguments causing the rifts between rational people, and for a better quality of life for everyone.  Note that I say Quality of Life (QOL), not Standard of Living (SOL) – we confuse the two as being the same when they are vastly different.

SOL is the degree of wealth and material comfort available to a person or community.  It is measured by Gross Domestic Product per capita – in other words, how much money is moving through a country's economy in a given year (this will be covered more in the next post).  QOL is the general well-being of individuals and societies, taking into account the negative and positive aspects of life.  It is less about the economy and more about life satisfaction, including everything from happiness, physical health, family, education, employment, wealth, safety, and security to freedom, religious beliefs, and the quality of the environment.  A good SOL is good, but a better QOL is preferable!  If a QOL includes a good SOL, so much the better, as long as the SOL has all the attributes that create a good QOL.  Having sat in many ridiculous traffic jams and watched people struggle with finances, I note that sociological studies show that the QOL in the U.S. is not as high as people would believe.  If the amount of money were truly equal to QOL, then people with more money should have the highest QOL.  Alas, this is not true.  People with more money feel more secure from financial threats, but other than that, they do not score any higher on any measure of QOL – indeed, in many cases they score lower because their whole world is tied up with financial worries and loss of community support.

I also find that what people are determined to defend, quite passionately at times, is the technology they perceive as essential to the standard of living in a modern industrial society.  If we exclude the people who benefit directly from investments in fossil fuels, we find that people supporting fossil fuels do so from an ideological basis, not a factual one.  I was once in a discussion – actually, I was talking, but he was almost screaming at me – about fossil fuels and renewable energy options.  The man kept saying that society and the economy would collapse without oil, coal, and gas.  He kept going on about the problem of getting to work without gasoline to put in his car.  Like me, he was old enough to recall the oil embargo and shortages of the 1970s, and he feared a recurrence.  When he had calmed down, I asked him if he was in love with gasoline and the internal combustion engine, or was it more that he needed a vehicle to get to work in a reliable, efficient, convenient, and cost-effective manner?  Did he really care what happened behind the scenes when he flipped a light switch, as long as the light or appliance came on?  So many people seem ready to fight for fossil fuels when what they really want is merely the technology and resource stability to maintain their lifestyle and move about with the ease that modern cars allow.  Had battery systems been more advanced in 1893, we would all be driving electric cars today, and no one would be fighting for gasoline engines.  Back then the electricity would have come from coal-fired and hydroelectric generation, but batteries might have been a major storage factor even then.  When we look at today's problematic electrical grid system, the easiest solution using today's options is the one most challenged – to use renewable forms of energy generation that readily lend themselves to localized sources.  More about the grid and Regional Transmission Organizations (RTOs) in another post.

First, a short story about greed, control, and the electrical system we take for granted.  By 1900, the modern AC electrical grid was fast becoming the basis of the world's electrical supply.  Two inventors had vied for dominance in this new technology: Thomas Edison (General Electric) with his DC system and Nikola Tesla (Westinghouse) with his AC system.  Tesla was well ahead of the game and won the contract to electrify the lighting system at the World's Columbian Exposition in Chicago in 1893.  After that, the AC system became the standard for electric utilities worldwide.  Then the big money men got into the act.  The first automobiles were electric, and John D. Rockefeller was greatly concerned.  Not only was his oil monopoly's profit being threatened by the electric grid (people had used kerosene derived from oil, which had taken over from whale oil), but electric cars would also remove gasoline as a potential automobile fuel.  Rockefeller backed Henry Ford, and the gasoline-driven internal combustion engine became the automotive standard.  Tesla was still at the top of his game, and his electrical genius was beginning to concern other money giants.  For a time, many of the leading financiers of the day vied with one another to invest in Tesla's projects.  Eventually the most important US banker of his generation, J.P. Morgan (a notable financier for the Rothschild family), became Tesla's exclusive backer during the period when he experimented most actively with wireless transmission, rather than wires, for conducting electrical current.  At this point the story becomes unclear.  Despite backing Tesla's many inventions with a 51% share, it seems Tesla was more concerned with providing humanity cheap, even free, energy than with making money.  Almost overnight, Morgan, for whatever reason, pulled his support, vilifying Tesla's work as problematic.
The most popular and logical explanation seems to be that Tesla's potential wireless electrical system would have been almost impossible (at that time) to meter for the buying and selling of electricity.  Tesla lived the rest of his life a broken and ruined man, unable to continue his experiments.  Almost immediately after his death in 1943, all his research documentation was removed by the U.S. government's Office of Alien Property.  What happened to it after that is the stuff of Hollywood movies.

Over a century later we are still entrenched in the same system of producing electricity – we need something to spin a turbine that generates AC electrical energy.  Until relatively recently, we either heated water (with coal, oil, methane, nuclear decay, or trash) to produce super-heated steam, or used a kinetic water source (e.g. water moving downhill), to spin the turbine.  While micro-hydroelectric systems are now available (if you live near a running water source you are allowed to use), the rest require a large-scale power plant, so we are stuck with the grid system.  Or are we?

The technology now exists for every house to be its own power-generating system, which can then feed unused electricity back into a more localized grid for local businesses to use.  In classes I would show my students a Google satellite image of the houses surrounding the university and ask them to notice the most wasted space in the picture that was soaking up sunlight – the roofs!  Imagine every house having solar panels (PVs) on the sunny side, coupled with solar thermal panels, small wind generators, and below-surface geothermal heat-pump systems connected to the house.  This means we could all be independent of grid electrical needs.  It has already been shown to be doable.  I had a friend who built his house off the grid in Evergreen, Colorado, and for the next 20 years that he lived there, he never paid a utility bill.  His water came from a well with a solar-powered pump.  He also had a leach field, so no sewage costs either.  His house cost $1.05 per square foot to build, compared to the average $1.25 per square foot for the 'regular homes' around him.  There are so many ways to do this kind of system with current technology.  The only drawback?  People resistant to thinking differently!  Economists also weigh in, emphasizing the expense of changing the whole system.  What they neglect to show is how economies of scale would reduce the cost of investing in this idea when building a home.  Before all the naysayers rush out to point out a minor problem, let me acknowledge that all these technologies require some form of manufacturing, which can itself be a polluting part of the system through the mining of necessary minerals.  I admit it is not perfect, but compared to the highly polluting fossil fuels that we burn ALL the time, it is a step in a better direction, because once in place these systems are a non-polluting source of electricity for a long time.  The pollution aspect is the one I hear least about when people argue about getting beyond fossil fuels.  The reliability and economic aspects are always the first and foremost arguments, but quality of life gets lost in the debate.  More about that in the next post.

Alternative Transportation modes – New Technology High Speed Transport Systems

In the USA, the only fast way to get across the country is by air, which can be trying and tedious at times, especially when bad weather exists in some major part of the country.  Weather can quickly disrupt air service across the country because many planes fly multiple legs on any route (i.e. the weather might be fine where you are, but adverse where your plane is coming from).  While aircraft can fly through bad weather, taking off and landing require reasonable conditions.  This form of transportation is also one of the most polluting options and has adverse consequences for the quality and behavior of our atmosphere.  Trains have already been covered at some length in this blog, but what is the state of the art for this form of transport?  High-speed rail would work well for at least two express (220 mph) West-East corridors across the States and several North-South routes.  This is similar to Europe, with its already well-established high-speed routes that are being upgraded all the time.

At this time, there are only four high-speed routes in the USA: the New York to Washington line, though with only an 83 mph average speed; the Los Angeles to San Francisco line, still under construction and unlikely to be finished soon; the soon-to-be-started Houston to Northern Texas line; and the newly proposed Charlotte to Atlanta line.  The highly anticipated Texas line would allow people to avoid the deadly I-45 corridor, and the train would resemble the Japanese Shinkansen system.  Notably, the Shinkansen has never had a passenger fatality caused by the trains themselves – not even during the monstrous earthquake of 2011, and despite two minor derailment incidents during its long service history.  It is expected that the Texas line will be running by 2026.

One of the greatest problems with new technology is simply that – it is new and means changing how we look at the way we transport ourselves and our goods around.  The change to new technologies is like the change from horse and cart to the train and subsequently to the automobile and on to air travel.   Technology can be exciting and at the same time make many of us fear the unknown changes.       

Hyperloop System

The maglev, also discussed earlier, is poised to become a reality.  Elon Musk proposed an experimental Hyperloop Transport Technology (HTT) in 2012, but it hasn't gone beyond experimentation yet.  What makes the HTT different is that the maglev runs within a sealed low-vacuum tube to all but eliminate air resistance.  The work done by Richard Branson's Virgin company (Hyperloop One) is showing more promise and is ready to scale up from the half-kilometer test track to a longer track for final testing.  Ten places in high-population-density areas (chosen out of 2,600 requests: four in the USA, one in Canada, one in Mexico, two in the UK, and two in India) have been selected to work alongside Virgin's technologies as the optimum places to build prototypes of this transport technology.  Hyperloop One would travel at speeds of 760 mph, with May 2021 scheduled for its first real run.  The advantage of Hyperloop One is that it can run from downtown city areas, with the sealed tube (above ground or even underground) completely unaffected by weather.  Of course, the tube would have escape areas in case a train stopped within it for some reason.  The tube could also be sectioned to limit vacuum loss in the unlikely event of a breach, in which case the worst that could happen would be friction slowing the train.  The option of commuting on an HTT would allow people to live far from where they work for the same commute time that they currently experience in city traffic.  One other advantage of ground-based high-speed transport systems is that they are largely unaffected by events like volcanic eruptions, as occurred over Europe in 2010 when the Icelandic volcano Eyjafjallajökull disrupted air traffic across the Atlantic and Europe for over a month.  Now imagine an advanced ultra-efficient vacuum hyperloop that can travel at more than 4,000 mph.  While technologically possible now, the devil is in the details.  Yet the possibility of travelling through underground tube systems at such speeds around the world is no longer science fiction, but something that may be realized in the not-too-distant future.
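
As a rough back-of-the-envelope illustration of what those speeds mean (the route distance and jet cruise speed below are my approximations, not published figures):

```python
def travel_hours(distance_miles, speed_mph):
    """Simple time = distance / speed estimate (ignores acceleration)."""
    return distance_miles / speed_mph

# Los Angeles to San Francisco, roughly 380 miles:
hyperloop_time = travel_hours(380, 760)  # about half an hour in the tube
jet_time = travel_hours(380, 500)        # cruise time alone, before
                                         # airports, taxiing, and boarding
```

A 760 mph tube turns an inter-city trip into a commute, and since the pod leaves from downtown, none of the airport overhead applies.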

Hovercraft (Air Cushion Vehicle)

One well-known technology that is little discussed as a form of rapid transportation, especially for freight, is hover technology.  In 1955 Christopher Cockerell ran the first hovercraft, using a vacuum cleaner engine to create the lift.  The technology remained essentially unchanged for 50 years, and the noisy engines were the single greatest reason for its lack of mainstream use – even so, the British successfully used hovercraft as car ferries for nearly 50 years.  New engine technologies, especially electric engines, are now allowing them to come back as an option.  Hovercraft still in service travel 4-6 times faster than boat ferries and 2-3 times faster than catamarans or hydrofoils.

The greatest advantage of hovercraft is that they can cover almost any terrain without surface preparation (e.g. mud flats, estuaries, rivers, oceans, snow and ice), and indeed this is their greatest use at this time, especially for rescue and military craft needing to move from aquatic to terrestrial surfaces and vice versa.  Like trains, however, they are grade restricted – the terrain can't be too steep for the engines to push uphill or to safely slow the craft downhill.  Hovercraft are almost unaffected by weather conditions, although heavy storms on the ocean would still need to be ridden out or avoided, as with any modern ocean shipping.  The reduced freight handling needs would also make hovercraft more cost effective and efficient.  Imagine loading up a mega-hovercraft in Denver bound for China.  It could take off towards California along hover paths (marked throughways that prevent these craft from going off across country) until it reached the coast.  Then, after customs inspections, the craft could simply slide down a ramp into the ocean and run full speed (up to 150 mph) across the Pacific, arriving a mere 43 hours later in Shanghai and gliding up onto the land with the same cargo on board.  No road, bridge, or rail building or related maintenance – only hover throughways to negotiate while on land.
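
The 43-hour figure above checks out with simple arithmetic if we assume a trans-Pacific leg of roughly 6,450 miles at the full 150 mph (the leg distance is my assumption for illustration):

```python
def leg_hours(distance_miles, speed_mph):
    """Travel time for one leg of the journey at a constant speed."""
    return distance_miles / speed_mph

ocean_leg = leg_hours(6450, 150)
print(ocean_leg)  # 43.0 hours for the ocean crossing
```

Compare that with the weeks a container ship takes on the same crossing, and with zero cargo re-handling at either shoreline.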

Alternative Transportation Modes – Light Rail and Urban Commuter Rail Systems.

The Regional Transportation District of Denver (RTD, affectionately known as The Ride) currently runs 124 local, 16 express, 16 regional, 16 limited, 8 SkyRide, and several special-service bus lines, plus 8 light rail lines and 3 commuter rail lines with 71 stations and 88 miles of track.  The first rail line opened October 7, 1994.  The three commuter rail lines reach out from Union Station to DIA, Westminster, and Wheat Ridge (the A, B, and G lines), while the 8 light rail lines (the C, D, E, F, H, L, R, and W lines) radiate from central Denver out to the suburbs.  Visit most large U.S. cities and you will probably find a similar situation: light or commuter rail running through the greater city.  Often the new light rail merely re-establishes the old tram routes that were the norm from the late 1800s through the early 1900s.  (Recall from an earlier post that Ford and Rockefeller were pivotal in removing mass transit systems in the U.S. to make way for cars and trucks.)

One of the problems for the Front Range is that while Denver mass transit is growing, the rest of the towns from Cheyenne to Trinidad have sparse transit options, whether bus or train.  Future projections show the RTD commuter rail eventually running from Fort Collins to Colorado Springs, but those projections are decades in the future, not mere years.  Many Front Range towns daily see heavy freight trains running north and south from beyond Cheyenne to beyond Raton, New Mexico.  It should be remembered that this rail line also used to be part of the passenger rail system.  When you drive I-25 from Cheyenne to Fort Collins you will notice that the heavy rail runs close to the freeway much of the way.

The operators of the heavy rail system do not particularly like running freight trains through all of the small towns between Fort Collins and Pueblo, but that is the line they have; on Mason Street in Fort Collins the rails run literally down the middle of the street.  It takes a lot of energy to move a heavy freight train, and it is inefficient, and a nuisance, to have to slow down or stop such a heavy vehicle.  The operators would love a line running east of the city, say following E-470, where many easements are already established.  But that line would need to be built, and the key here is cost.  To build a light rail line up to Fort Collins would cost roughly $10-20 million per mile, with much eminent-domain (compulsory purchase) acquisition of private land to create the route.  To build the new heavy rail line just described would be more like $1-5 million per mile.  If that became the freight line, the current heavy rail line would be freed up at no extra cost, and commuter rail could begin almost immediately once the freight traffic switched tracks.
All that would be needed would be passenger parking and stations; think of the old Loveland Depot at Railroad Avenue and 4th Street to picture where and how the commuter rail would run.  And with only 5 passenger cars instead of 120 freight cars, waits at rail crossings would be minimal.
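The cost argument above can be made concrete with a back-of-the-envelope comparison using the post's per-mile figures.  The ~60-mile Denver-to-Fort Collins corridor length is my assumption for illustration, not a figure from the text.

```python
# Compare total construction cost for the two options described above,
# using the post's per-mile cost ranges.
corridor_miles = 60                            # assumed corridor length

light_rail_low, light_rail_high = 10e6, 20e6   # $/mile for new light rail (from the text)
heavy_rail_low, heavy_rail_high = 1e6, 5e6     # $/mile for a new heavy rail bypass (from the text)

print(f"New light rail line: ${corridor_miles * light_rail_low / 1e6:,.0f}M "
      f"to ${corridor_miles * light_rail_high / 1e6:,.0f}M")
print(f"New heavy rail bypass: ${corridor_miles * heavy_rail_low / 1e6:,.0f}M "
      f"to ${corridor_miles * heavy_rail_high / 1e6:,.0f}M")
```

Even at the high end, the heavy rail bypass comes in at a fraction of the light rail cost, which is the post's point: build the cheap bypass for freight and run commuter trains on the track that already exists.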

Pros and Cons of mass transit commuter and light rail

Pros – Trains are more energy efficient than road vehicles, produce much less air pollution than cars, require less land than the roads and parking areas demanded by automotive traffic, and substantially reduce traffic congestion, especially at peak rush times.  Trains are a relatively safe form of transportation, causing almost no injuries and deaths compared with auto traffic.  The cost of running trains is about 90% of that of running a bus system.  Studies have also found that transit systems such as light rail induce investment and development in the areas they serve, because industry sectors have a greater incentive to locate near transit corridors.  Property values have also been found to increase near transit corridors.  For example, the knowledge- and computer-based industries of Silicon Valley located there in part because of the proximity of mass transit systems.

Cons – Rail systems can be expensive to build and are only really cost effective along high-population corridors.  The Front Range fits this requirement, although parts of other cities need to be assessed as to whether rail or bus is the more viable option.  Most transit systems need city, state, or federal subsidies of some kind, and ridership can vary with the price of gasoline; light rail is more prone to this problem than commuter rail.  Riders are also tied to timetables, even when fares are cheaper than driving.  Although not a major problem, rail lines can cause noise and vibration for people living along rail corridors.  Light and commuter rail construction projects also seem to suffer endemic delays and cost overruns.  And if there is a problem on a rail line, the track is blocked, because you cannot reroute a train the way you can a bus.

Whenever I travel east down I-70 from the mountains I am always amazed at the massive amount of traffic, and at how much of it is Front Range traffic returning from playing in the mountains.  The addition of a very expensive ($70 million) 12-mile express flow lane has helped a little, but anyone who travels that route at any time of year still experiences the log-jam that is part of the I-70 mountain rush-period experience.  For many years the idea of a monorail from Denver to Summit County has been debated.  The monorail could travel at more than 100 mph, making stops at mountain stations in all the towns along the route between Denver and Eagle, with buses scheduled for the remaining short trips up side valleys to A-Basin, Keystone, Breckenridge, and Minturn.  (There would be the option of continuing it all the way to Glenwood Springs as well.)  No traffic, fast access to the mountains, no parking problems, and the ability to relax and socialize while the train does all the work of getting you to your destination.  Designers say it could carry as many as 10,000 passengers an hour in each direction and cost about $25-30 million per mile to build (half the cost of adding two more I-70 lanes), while moving nearly 8 times as many people as those extra lanes would.  The biggest problem, apparently, is that it would serve the needs of only 90% of the people, something some influential Colorado business and political leaders feel particularly strongly about.
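The "nearly 8 times as many people" figure above can be reconstructed under plausible assumptions.  The 10,000 passengers/hour figure is the designers' estimate quoted in the text; the per-lane throughput and vehicle occupancy below are my assumptions (rough values typical of congested highway planning), not figures from the post.

```python
# Sanity check on the monorail-vs-extra-lane capacity comparison.
monorail_pax_per_hour = 10_000   # designers' estimate, per direction (from the text)
lane_vehicles_per_hour = 1_000   # assumed throughput of one congested mountain lane
avg_occupancy = 1.3              # assumed persons per vehicle

lane_pax_per_hour = lane_vehicles_per_hour * avg_occupancy
print(f"One added lane: ~{lane_pax_per_hour:,.0f} people/hour")
print(f"Monorail advantage: ~{monorail_pax_per_hour / lane_pax_per_hour:.1f}x")
```

With those assumptions one added lane moves about 1,300 people per hour, giving a ratio of roughly 7.7x, consistent with the post's "nearly 8 times" claim; different occupancy or throughput assumptions would shift the ratio.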