Saturday 19 2024

Medicare vs. Medicare Advantage: sales pitches are often from biased sources, the choices can be overwhelming and impartial help is not equally available to all

It can take a lot of effort to understand the many different Medicare choices. Halfpoint Images/Moment via Getty Images
Grace McCormack, University of Southern California and Melissa Garrido, Boston University

The 67 million Americans eligible for Medicare make an important decision every October: Should they make changes in their Medicare health insurance plans for the next calendar year?

The decision is complicated. Medicare has an enormous variety of coverage options, with large and varying implications for people’s health and finances, both as beneficiaries and taxpayers. And the decision is consequential – some choices lock beneficiaries out of traditional Medicare.

Beneficiaries choose an insurance plan when they turn 65 or become eligible based on qualifying chronic conditions or disabilities. After the initial sign-up, most beneficiaries can make changes only during the open enrollment period each fall.

The 2024 open enrollment period, which runs from Oct. 15 to Dec. 7, marks an opportunity to reassess options. Given the complicated nature of Medicare and the scarcity of unbiased advisers, however, finding reliable information and understanding the options available can be challenging.

We are health care policy experts who study Medicare, and even we find it complicated. One of us recently helped a relative enroll in Medicare for the first time. She’s healthy, has access to health insurance through her employer and doesn’t regularly take prescription drugs. Even in this straightforward scenario, the number of choices was overwhelming.

The stakes of these choices are even higher for people managing multiple chronic conditions. There is help available for beneficiaries, but we have found that there is considerable room for improvement – especially in making help available for everyone who needs it.

The choice is complex, especially when you are signing up for the first time and if you are eligible for both Medicare and Medicaid. Insurers often engage in aggressive and sometimes deceptive advertising and outreach through brokers and agents. Choose unbiased resources to guide you through the process, like www.shiphelp.org. Make sure to start before your 65th birthday for initial sign-up, look out for yearly plan changes, and start well before the Dec. 7 deadline for any plan changes.

2 paths with many decisions

Within Medicare, beneficiaries have a choice between two very different programs. They can enroll in either traditional Medicare, which is administered by the government, or one of the Medicare Advantage plans offered by private insurance companies.

Within each program are dozens of further choices.

Traditional Medicare is a nationally uniform cost-sharing plan for medical services that allows people to choose their providers for most types of medical care, usually without prior authorization. Deductibles for 2024 are US$1,632 for hospital costs and $240 for outpatient and medical costs. Patients also must pay a share of costs, known as coinsurance, starting on Day 61 of a hospital stay and Day 21 of a skilled nursing facility stay. After the yearly deductible, Medicare pays 80% of outpatient and medical costs, leaving the person responsible for the remaining 20%. Traditional Medicare’s basic plan, known as Part A and Part B, also has no out-of-pocket maximum.
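To make those cost-sharing rules concrete, here is a minimal sketch, in Python and with hypothetical billed amounts, of what someone with only Parts A and B might owe for outpatient care in a year under the 2024 figures above. It ignores premiums, Medigap and services with different rules.

```python
def part_b_out_of_pocket(allowed_charges, deductible=240, coinsurance=0.20):
    """Rough estimate of annual out-of-pocket costs for Part B services.

    allowed_charges is the year's total Medicare-approved outpatient and
    medical charges (a hypothetical figure chosen for illustration).
    """
    if allowed_charges <= deductible:
        return allowed_charges
    # The beneficiary pays the deductible plus 20% of everything above it,
    # with no cap: traditional Parts A and B have no out-of-pocket maximum.
    return deductible + coinsurance * (allowed_charges - deductible)

for charges in (1_000, 10_000, 100_000):  # hypothetical annual totals
    print(charges, round(part_b_out_of_pocket(charges)))
# 1000 -> 392, 10000 -> 2192, 100000 -> 20192: the 20% share keeps growing.
```

The missing cap in that last line is what the supplemental Medigap coverage described below is designed to address.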

Traditional Medicare starts with Medicare parts A and B. Bill Oxford/iStock via Getty Images

People enrolled in traditional Medicare can also purchase prescription drug coverage, known as Part D, from a private insurance company. And they can purchase supplemental coverage, known as Medigap, to lower or eliminate their deductibles, coinsurance and copayments, cap costs for Parts A and B, and add an emergency foreign travel benefit.

Part D plans cover prescription drugs, with premiums ranging from about $0 to $100 a month. People with lower incomes may get extra financial help by signing up for Medicare’s Part D Extra Help program or state-sponsored pharmaceutical assistance programs.

There are 10 standardized Medigap plans, also known as Medicare supplement plans. Depending on the plan, and the person’s gender, location and smoking status, Medigap typically costs from about $30 to $400 a month when a beneficiary first enrolls in Medicare.

The Medicare Advantage program allows private insurers to bundle everything together and offers many enrollment options. Compared with traditional Medicare, Medicare Advantage plans typically offer lower out-of-pocket costs. They often bundle supplemental coverage for hearing, vision and dental, which is not part of traditional Medicare.

But Medicare Advantage plans also limit provider networks, meaning that people who are enrolled in them can see only certain providers without paying extra. In comparison to traditional Medicare, Medicare Advantage enrollees on average go to lower-quality hospitals, nursing facilities, and home health agencies but see higher-quality primary care doctors.

Medicare Advantage plans also often require prior authorization – often for important services such as stays at skilled nursing facilities, home health services and dialysis.

Choice overload

Understanding the tradeoffs between premiums, health care access and out-of-pocket health care costs can be overwhelming.

Turning 65 begins the process of taking one of two major paths, which each have a thicket of health care choices. Rika Kanaoka/USC Schaeffer Center for Health Policy & Economics

Though options vary by county, the typical Medicare beneficiary can choose among as many as 10 Medigap plans and 21 standalone Part D plans, or from an average of 43 Medicare Advantage plans. People who are eligible for both Medicare and Medicaid, have certain chronic conditions or live in a long-term care facility can also choose from additional types of Medicare Advantage plans known as Special Needs Plans.

Medicare Advantage plans can vary in terms of networks, benefits and use of prior authorization.

Different Medicare Advantage plans have varying and large impacts on enrollee health, including dramatic differences in mortality rates. Researchers found a 16% difference per year between the best and worst Medicare Advantage plans, meaning that for every 100 people in the worst plans who die within a year, they would expect only 84 people to die within that year if all had been enrolled in the best plans instead. They also found plans that cost more had lower mortality rates, but plans that had higher federal quality ratings – known as “star ratings” – did not necessarily have lower mortality rates.

The quality of different Medicare Advantage plans, however, can be difficult for potential enrollees to assess. The federal plan finder website lists available plans and publishes a quality rating of one to five stars for each plan. But in practice, these star ratings don’t necessarily correspond to better enrollee experiences or meaningful differences in quality.

Online provider networks can also contain errors or include providers who are no longer seeing new patients, making it hard for people to choose plans that give them access to the providers they prefer.

While many Medicare Advantage plans boast about their supplemental benefits, such as vision and dental coverage, it’s often difficult to understand how generous this supplemental coverage is. For instance, while most Medicare Advantage plans offer supplemental dental benefits, cost-sharing and coverage can vary. Some plans don’t cover services such as extractions and endodontics, which includes root canals. Most plans that cover these more extensive dental services require some combination of coinsurance, copayments and annual limits.

Even when information is fully available, mistakes are likely.

Part D beneficiaries often fail to accurately evaluate premiums and expected out-of-pocket costs when making their enrollment decisions. Past work suggests that many beneficiaries have difficulty processing the proliferation of options. A person’s relationship with health care providers, financial situation and preferences are key considerations. The consequences of enrolling in one plan or another can be difficult to determine.

The trap: Locked out

At 65, when most beneficiaries first enroll in Medicare, federal regulations guarantee that anyone can get Medigap coverage. During this initial sign-up, beneficiaries can’t be charged a higher premium based on their health.

Older Americans who enroll in a Medicare Advantage plan but then want to switch back to traditional Medicare after more than a year has passed lose that guarantee. This can effectively lock them out of enrolling in supplemental Medigap insurance, making the initial decision a one-way street.

For the initial sign-up, Medigap plans are “guaranteed issue,” meaning the plan must cover preexisting health conditions without a waiting period and must allow anyone to enroll, regardless of health. They also must be “community rated,” meaning that the cost of a plan can’t rise because of age or illness, although it can go up due to other factors such as inflation.

People who enroll in traditional Medicare and a supplemental Medigap plan at 65 can expect to continue paying community-rated premiums as long as they remain enrolled, regardless of what happens to their health.

In most states, however, people who switch from Medicare Advantage to traditional Medicare don’t have as many protections. Most state regulations permit plans to deny coverage, impose waiting periods or charge higher Medigap premiums based on an applicant’s expected health costs. Only Connecticut, Maine, Massachusetts and New York guarantee that people can get Medigap plans after the initial sign-up period.

Deceptive advertising

Information about Medicare coverage and assistance choosing a plan is available but varies in quality and completeness. Older Americans are bombarded with ads for Medicare Advantage plans that they may not be eligible for and that include misleading statements about benefits.

A November 2022 report from the U.S. Senate Committee on Finance found deceptive and aggressive sales and marketing tactics, including mailed brochures that implied government endorsement, telemarketers who called up to 20 times a day, and salespeople who approached older adults in the grocery store to ask about their insurance coverage.

The Department of Health and Human Services tightened rules for 2024, requiring third-party marketers to include federal resources about Medicare, including the website and toll-free phone number, and limiting the number of contacts from marketers.

Although the government has the authority to review marketing materials, enforcement is partially dependent on whether complaints are filed. Complaints can be filed with the federal government’s Senior Medicare Patrol, a federally funded program that prevents and addresses unethical Medicare activities.

Meanwhile, the number of people enrolled in Medicare Advantage plans has grown rapidly, doubling since 2010 and accounting for more than half of all Medicare beneficiaries by 2023.

Nearly one-third of Medicare beneficiaries seek information from an insurance broker. Brokers sell health insurance plans from multiple companies. However, because they receive payment from plans in exchange for sales, and because they are unlikely to sell every option, a plan recommended by a broker may not meet a person’s needs.

Help is out there – but falls short

An alternative is the federal government, which offers three resources to help people choose a plan: 1-800-Medicare, medicare.gov and the State Health Insurance Assistance Program, also known as SHIP.

The SHIP program combats misleading Medicare advertising and deceptive brokers by connecting eligible Americans with counselors by phone or in person to help them choose plans. Many people say they prefer meeting in person with a counselor over phone or internet support. SHIP staff say they often help people understand what’s in Medicare Advantage ads and disenroll from plans they were directed to by brokers.

Telephone SHIP services are available nationally, but one of us and our colleagues have found that in-person SHIP services are not available in some areas. We tabulated areas by ZIP code in 27 states and found that although more than half of the locations had a SHIP site within the county, areas without a SHIP site included a larger proportion of people with low incomes.

Virtual services are an option that’s particularly useful in rural areas and for people with limited mobility or little access to transportation, but they require online access. Virtual and in-person services, where both a beneficiary and a counselor can look at the same computer screen, are especially useful for looking through complex coverage options.

We also interviewed SHIP counselors and coordinators from across the U.S.

As one SHIP coordinator noted, many people are not aware of all their coverage options. For instance, one beneficiary told a coordinator, “I’ve been on Medicaid and I’m aging out of Medicaid. And I don’t have a lot of money. And now I have to pay for my insurance?” As it turned out, the beneficiary was eligible for both Medicaid and Medicare because of their income, and so had to pay less than they thought.

The interviews made clear that many people are not aware that Medicare Advantage ads and insurance brokers may be biased. One counselor said, “There’s a lot of backing (beneficiaries) off the ledge, if you will, thanks to those TV commercials.”

Many SHIP staff counselors said they would benefit from additional training on coverage options, including for people who are eligible for both Medicare and Medicaid. The SHIP program relies heavily on volunteers, and there is often greater demand for services than the available volunteers can offer. Additional counselors would help meet needs for complex coverage decisions.

The key to making a good Medicare coverage decision is to use the help available and weigh your costs, your access to health providers and your current health and medication needs, while also considering how those needs might change as time goes on.

This article is part of an occasional series examining the U.S. Medicare system.

This story has been updated to remove a graphic that contained incorrect information about SHIP locations, and to correct the date of the open enrollment period.

Grace McCormack, Postdoctoral researcher of Health Policy and Economics, University of Southern California and Melissa Garrido, Research Professor, Health Law, Policy & Management, Boston University

This article is republished from The Conversation under a Creative Commons license. 

How You Benefit from Serving on a Search Committee


There are many professional development opportunities in higher education, but one that is not often considered is the chance to serve on a search committee. This work is overlooked because the goal of the search is to find the best talent for an open position instead of improving the evaluators themselves. The candidates are the ones who stand to benefit from a potential job offer, or at least by improving their interviewing skills and "getting their name out there."

Search committee members have less self-interest. They are volunteers doing what needs to be done for the good of the institution. Someone has to do it. If they are faculty, they might get credit for service work. Staff might benefit from helping ensure the quality of their next colleague or supervisor. But the work of the search committee is rarely considered career-enhancing for anyone except the person who gets hired.

This perception should change.

Search committee members have a lot to gain, according to Christopher D. Lee, managing director with Storbeck Search. In his 13 years as the chief human resources officer for the Virginia Community College System, Lee served on more than 30 search committees to hire presidents at 23 institutions.

"I've earned a master's degree in leadership from having watched more than 300 people interview to be a community college president," said Lee, who has authored books and trained HR professionals related to the search process. "I've listened to experienced leaders talk about their abilities and what they would do in situations, and I've taken notes. I could write a book about leadership, so I know you can benefit by serving on a search committee."

You don't have to serve on a presidential search committee to benefit or even limit your professional development to hearing executives pontificate about leadership. Here are six areas of professional development that arise from serving on any type of search committee at your institution:

  1. Choosing candidates. Developing an eye for talent is a valuable skill in higher education. According to Daniel Grassian, author of "An Insider's Guide to University Administration," one of the most important decisions (if not the most important decision) administrators make is who to hire. Serving on a search committee trains you to make good hiring decisions and helps administrators at your institution make their choices. When they look good, you look good -- that goes for candidates you recommend and the people who rely on you to help them make their critical decisions.
  2. Project management. There are a lot of moving pieces when it comes to a search, from scheduling interviews to organizing the evaluation tools and rubrics. Being a part of this process will sharpen your skills that can be applied to other content areas or give you ideas and background knowledge for future searches that you might be asked to lead.
  3. Idea generation. Lee has always considered the interview process as receiving free consulting from multiple experts. Asking candidates questions about how they would solve real problems or improve an organization will provide you fresh ideas and perspectives from different vantage points that will help you do your job better. This doesn't mean stealing proprietary information but rather taking the combination of ideas from candidates to advance your thinking.
  4. Domain knowledge. If you serve on a committee for another academic department or functional area, you can learn a lot about how organizations operate from conducting a search. You learn about the issues facing administrators from other institutions or industries, or what academics in different disciplines value when it comes to things like tenure requirements or teaching methods. This can help you stay relevant in your career by understanding what's important to other professionals and potential collaborators. If anything, seeing how others work and think will inform your practice and help you recognize the distinctive qualities of your own discipline.
  5. Networking. Serving on a search committee will give you more access and help you engage with professionals who you would not otherwise interact with, whether that's fellow committee members from within your institution or outside candidates and their references. When you work together on a common goal to hire someone, that creates a special bond and builds relationships, even with the successful candidate once they become your co-worker. Sharing your thoughts or exhibiting your skills during the process also enhances your influence with colleagues as someone they can trust.
  6. Improve your candidacy. After serving on a search committee, you will be better prepared for the next time you apply or interview for a job. You will be exposed to the conversations and reasoning behind what committees weigh most when evaluating candidates. By reviewing and comparing dozens of applications, you'll have a better understanding of what makes a resume or CV effective and what doesn't (and what little time is spent reading each application during the initial screening). You'll be better prepared to answer interview questions and articulate a point from having sat on the other side of the table.

Once you appreciate the benefits of serving on a search committee, you can position yourself to be selected for one. This might be easy: the demand for volunteers for this type of service likely exceeds the number of people willing to serve. The further up the organizational chart, the more competitive it will become.

Having previous experience on search committees will help your chances, but don't let your lack of knowledge about the role that your institution is attempting to fill prevent you from contributing to the search.

"To have a good search committee, you need to have a novice, someone who's not a member of the key area," Lee said. "That person usually asks the basic question of 'Why?' People who are in that domain have made a whole bunch of assumptions, paradigms, and expected ways of seeing the world. The novice disrupts that."

According to Lee, if there are five members from the same department, they will be prone to make poorer decisions and perpetuate biases and preferences for the same type of candidates. Having a novice ask questions prevents groupthink, introduces diverse viewpoints, and provides clarity of purpose.

"It's always value-added because people will say, 'Oh, I've never thought about it that way,'" Lee said.

There is a caveat when it comes to faculty searches. Especially in a unionized environment, academic departments might prevent outsiders from serving on their search committees, or at least prefer that they not. While the autonomy to hire their own faculty can be considered a healthy application of academic freedom, this protection can be taken too far if it diminishes the core principles of critical thinking and considering other views. But generally speaking, higher education is an industry that embraces diversity of thought, especially in its hiring practices, and you're more likely to be welcomed onto a committee than not.

Ask your supervisor or contact your human resources office to express your interest in serving on search committees, and note your previous experience or novice status so that you will be on the short list of possible selections. Explain to your supervisor the benefits to the institution of thoroughly staffed searches and of representation from your department, as well as the benefits for your own professional development. Mention the benefits listed above, minus the "improve your candidacy" claim, of course.

You will find that by helping your employer conduct a search, you will be better positioned to be the one who is sought after for career advancement.

HigherEdJobs

This article is republished from HigherEdJobs® under a Creative Commons license. 

by Justin Zackal

Sustainable building effort reaches new heights with wooden skyscrapers

Wood engineered for strength and safety offers architects an alternative to carbon-intensive steel and concrete

At the University of Toronto, just across the street from the football stadium, workers are putting up a 14-story building with space for classrooms and faculty offices. What’s unusual is how they’re building it — by bolting together giant beams, columns and panels made of manufactured slabs of wood.

As each wood element is delivered by flatbed, a tall crane lifts it into place and holds it in position while workers attach it with metal connectors. In its half-finished state, the building resembles flat-pack furniture in the process of being assembled.

The tower uses a new technology called mass timber. In this kind of construction, massive, manufactured wood elements that can extend more than half the length of a football field replace steel beams and concrete. Though still relatively uncommon, it is growing in popularity and beginning to pop up in skylines around the world.

Today, the tallest mass timber building is the 25-story Ascent skyscraper in Milwaukee, completed in 2022. As of that year, there were 84 mass timber buildings eight stories or higher either built or under construction worldwide, with another 55 proposed. Seventy percent of the existing and future buildings were in Europe, about 20 percent in North America and the rest in Australia and Asia, according to a report from the Council on Tall Buildings and Urban Habitat. When you include smaller buildings, at least 1,700 mass timber buildings had been constructed in the United States alone as of 2023.

Mass timber is an appealing alternative to energy-intensive concrete and steel, which together account for almost 15 percent of global carbon dioxide emissions. Though experts are still debating mass timber’s role in fighting climate change, many are betting it’s better for the environment than current approaches to construction. It relies on wood, after all, a renewable resource.

Mass timber also offers a different aesthetic that can make a building feel special. “People get sick and tired of steel and concrete,” says Ted Kesik, a building scientist at the University of Toronto’s Mass Timber Institute, which promotes mass timber research and development. With its warm, soothing appearance and natural variations, timber can be more visually pleasing. “People actually enjoy looking at wood.”

Same wood, stronger structure

Using wood for big buildings isn’t new, of course. Industrialization in the 18th and 19th centuries led to a demand for large factories and warehouses, which were often “brick and beam” construction — a frame of heavy wooden beams supporting exterior brick walls.

As buildings became ever taller, though, builders turned to concrete and steel for support. Wood construction became mostly limited to houses and other small buildings made from the standard-sized “dimensional” lumber you see stacked at Home Depot.

But about 30 years ago, builders in Germany and Austria began experimenting with techniques for making massive wood elements out of this readily available lumber. They used nails, dowels and glue to combine smaller pieces of wood into big, strong and solid masses that don’t require cutting down large old-growth trees.

Engineers including Julius Natterer, a German engineer based in Switzerland, pioneered new methods for building with the materials. And architects including Austria’s Hermann Kaufmann began gaining attention for mass timber projects, including the Ölzbündt apartments in Austria, completed in 1997, and Brock Commons, an 18-story student residence at the University of British Columbia, completed in 2017.

In principle, mass timber is like plywood but on a much larger scale: The smaller pieces are layered and glued together under pressure in large specialized presses. Today, beams up to 50 meters long, usually made of what’s called glue-laminated timber, or glulam, can replace steel elements. Panels up to 50 centimeters thick, typically cross-laminated timber, or CLT, replace concrete for walls and floors.

These wood composites can be surprisingly strong — stronger than steel by weight. But a mass timber element must be bulkier to achieve that same strength. As a building gets higher, the wooden supports must get thicker; at some point, they simply take up too much space. So for taller mass timber buildings, including the Ascent skyscraper, architects often turn to a combination of wood, steel and concrete.

Historically, one of the most obvious concerns with using mass timber for tall buildings was fire safety. Until recently, many building codes limited wood construction to low-rise buildings.

Though they don’t have to be completely fireproof, buildings need to resist collapse long enough to give firefighters a chance to bring the flames under control, and for occupants to get out. Materials used in conventional skyscrapers, for instance, are required to maintain their integrity in a fire for three hours or more.

To demonstrate mass timber’s fire resistance, engineers put the wood elements in gas-fired chambers and monitor their integrity. Other tests set fire to mock-ups of mass timber buildings and record the results.

These tests have gradually convinced regulators and customers that mass timber can resist burning long enough to be fire safe. That’s partly because a layer of char tends to form early on the outside of the timber, insulating the interior from much of the fire’s heat.

Mass timber got a major stamp of approval in 2021, when the International Code Council changed the International Building Code, which serves as a model for jurisdictions around the world, to allow mass timber construction up to 18 stories tall. With this change, more and more localities are expected to update their codes to routinely allow tall mass timber buildings, rather than requiring them to get special approvals.

There are other challenges, though. “Moisture is the real problem, not fire,” says Steffen Lehmann, an architect and scholar of urban sustainability at the University of Nevada, Las Vegas.

All buildings must control moisture, but it’s absolutely crucial for mass timber. Wet wood is vulnerable to deterioration from fungus and insects like termites. Builders are careful to prevent the wood from getting wet during transportation and construction, and they deploy a comprehensive moisture management plan, including designing heat and ventilation systems to keep moisture from accumulating. For extra protection from insects, wood can be treated with chemical pesticides or surrounded by mesh or other physical barriers where it meets the ground.

Another problem is acoustics, since wood transmits sound so well. Designers use sound insulation materials, leave space between walls and install raised floors, among other methods.

Potential upsides of mass timber

Combating global warming means reducing greenhouse gas emissions from the building sector, which is responsible for 39 percent of emissions globally. Diana Ürge-Vorsatz, an environmental scientist at the Central European University in Vienna, says mass timber and other bio-based materials could be an important part of that effort.

In a 2020 paper in the Annual Review of Environment and Resources, she and colleagues cite an estimate from the lumber industry that the 18-story Brock Commons, in British Columbia, avoided the equivalent of 2,432 metric tons of CO2 emissions compared with a similar building of concrete and steel. Of those savings, 679 tons came from the fact that manufacturing wood generates fewer greenhouse gas emissions than manufacturing concrete and steel. Another 1,753 metric tons of CO2 equivalent were locked away in the building’s wood.
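Those two components account for the full estimate:

\[
679\ \text{t CO}_2\text{e avoided in manufacturing} \;+\; 1{,}753\ \text{t CO}_2\text{e stored in the wood} \;=\; 2{,}432\ \text{t CO}_2\text{e}
\]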

“If you use bio-based material, we have a double win,” Ürge-Vorsatz says.

But a lot of the current enthusiasm over mass timber’s climate benefits is based on some big assumptions. The accounting often assumes, for instance, that any wood used in a mass timber building will be replaced by the growth of new trees, and that those new trees will take the same amount of CO2 out of the atmosphere across time. But if old-growth trees are replaced with new tree plantations, the new trees may never reach the same size as the original trees, some environmental groups argue. There are also concerns that increasing demand for wood could lead to more deforestation and less land for food production.

Studies also tend to assume that once the wood is in a building, the carbon is locked up for good. But not all the wood from a felled tree ends up in the finished product. Branches, roots and lumber mill waste may decompose or get burned. And when the building is torn down, if the wood ends up in a landfill, the carbon can find its way out in the form of methane and other emissions.

“A lot of architects are scratching their heads,” says Stephanie Carlisle, an architect and environmental researcher at the nonprofit Carbon Leadership Forum, wondering whether mass timber always has a net benefit. “Is that real?” She believes climate benefits do exist. But she says understanding the extent of those benefits will require more research.

In the meantime, mass timber is at the forefront of a whole different model of construction called integrated design. In traditional construction, an architect designs a building first and then multiple firms are hired to handle different parts of the construction, from laying the foundation, to building the frame, to installing the ventilation system and so on.

In integrated design, says Kesik, the design phase is much more detailed and involves the various firms from the beginning. The way different components will fit and work together is figured out in advance. Exact sizes and shapes of elements are predetermined, and holes can even be pre-drilled for attachment points. That means many of the components can be manufactured off-site, often with advanced computer-controlled machinery.

A lot of architects like this because it gives them more control over the building elements. And because so much of the work is done in advance, the buildings tend to go up faster on-site — up to 40 percent faster than other buildings, Lehmann says.

Mass timber buildings tend to be manufactured more like automobiles, Kesik says, with all the separate pieces shipped to a final location for assembly. “When the mass timber building shows up on-site, it’s really just like an oversized piece of Ikea furniture,” he says. “Everything sort of goes together.”

To make nuclear fusion a reliable energy source one day, scientists will first need to design heat- and radiation-resilient materials

A fusion experiment ran so hot that the wall materials facing the plasma retained defects. Christophe Roux/CEA IRFM, CC BY
Sophie Blondel, University of Tennessee

Fusion energy has the potential to be an effective clean energy source, as its reactions generate incredibly large amounts of energy. Fusion reactors aim to reproduce on Earth what happens in the core of the Sun, where very light elements merge and release energy in the process. Engineers can harness this energy to heat water and generate electricity through a steam turbine, but the path to fusion isn’t completely straightforward.

Controlled nuclear fusion has several advantages over other power sources for generating electricity. For one, the fusion reaction itself doesn’t produce any carbon dioxide. There is no risk of meltdown, and the reaction doesn’t generate any long-lived radioactive waste.

I’m a nuclear engineer who studies materials that scientists could use in fusion reactors. Fusion takes place at incredibly high temperatures. So to one day make fusion a feasible energy source, reactors will need to be built with materials that can survive the heat and irradiation generated by fusion reactions.

Fusion material challenges

Several types of elements can merge during a fusion reaction. The one most scientists prefer is deuterium plus tritium. These two elements have the highest likelihood of fusing at temperatures that a reactor can maintain. This reaction generates a helium atom and a neutron, which carries most of the energy from the reaction.

Humans have successfully generated fusion reactions on Earth since 1952 – some even in their garage. But the trick now is to make it worth it. You need to get more energy out of the process than you put in to initiate the reaction.

Fusion reactions happen in a very hot plasma, which is a state of matter similar to gas but made of charged particles. The plasma needs to stay extremely hot – over 100 million degrees Celsius – and condensed for the duration of the reaction.

To keep the plasma hot and condensed and create a reaction that can keep going, you need special materials making up the reactor walls. You also need a cheap and reliable source of fuel.

While deuterium is very common and obtained from water, tritium is very rare. A 1-gigawatt fusion reactor is expected to burn 56 kilograms of tritium annually. But the world has only about 25 kilograms of tritium commercially available.
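Putting those two figures side by side shows how tight the supply is: the entire commercial stock would feed a single 1-gigawatt plant for only about five months.

\[
\frac{25\ \text{kg available}}{56\ \text{kg per reactor per year}} \approx 0.45\ \text{year} \approx 5\ \text{months}
\]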

Researchers need to find alternative sources for tritium before fusion energy can get off the ground. One option is to have each reactor generating its own tritium through a system called the breeding blanket.

The breeding blanket makes up the first layer of the plasma chamber walls and contains lithium that reacts with the neutrons generated in the fusion reaction to produce tritium. The blanket also converts the energy carried by these neutrons to heat.

The fusion reaction chamber at ITER will electrify the plasma.

Fusion devices also need a divertor, which extracts the heat and ash produced in the reaction. The divertor helps keep the reactions going for longer.

These materials will be exposed to unprecedented levels of heat and particle bombardment. And there aren’t currently any experimental facilities to reproduce these conditions and test materials in a real-world scenario. So, the focus of my research is to bridge this gap using models and computer simulations.

From the atom to full device

My colleagues and I work on producing tools that can predict how the materials in a fusion reactor erode, and how their properties change when they are exposed to extreme heat and lots of particle radiation.

As they get irradiated, defects can form and grow in these materials, which affect how well they react to heat and stress. In the future, we hope that government agencies and private companies can use these tools to design fusion power plants.

Our approach, called multiscale modeling, consists of looking at the physics in these materials over different time and length scales with a range of computational models.

We first study the phenomena happening in these materials at the atomic scale through accurate but expensive simulations. For instance, one simulation might examine how hydrogen moves within a material during irradiation.

From these simulations, we look at properties such as diffusivity, which tells us how much the hydrogen can spread throughout the material.

We can integrate the information from these atomic level simulations into less expensive simulations, which look at how the materials react at a larger scale. These larger-scale simulations are less expensive because they model the materials as a continuum instead of considering every single atom.

The atomic-scale simulations could take weeks to run on a supercomputer, while a continuum simulation takes only a few hours.
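As an illustration of what that cheaper continuum step can look like, here is a minimal sketch, not the group’s research code, of a one-dimensional hydrogen-diffusion calculation through a wall. The diffusivity would normally be fitted from the atomic-scale simulations; the value and dimensions below are placeholders.

```python
import numpy as np

# Toy 1D continuum model: hydrogen concentration C(x, t) in a wall evolves
# by Fick's second law, dC/dt = D * d2C/dx2, with a fixed (normalized)
# concentration on the plasma-facing side and zero on the far side.
D = 1.0e-9          # diffusivity, m^2/s (placeholder, not a measured value)
thickness = 1.0e-3  # wall thickness, m (placeholder)
n = 101             # grid points
dx = thickness / (n - 1)
dt = 0.4 * dx**2 / D          # within the explicit scheme's stability limit
C = np.zeros(n)
C[0] = 1.0                    # normalized concentration at the hot surface

for _ in range(200_000):      # run long enough to approach steady state
    lap = (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2   # interior 2nd derivative
    C[1:-1] += dt * D * lap
    C[0], C[-1] = 1.0, 0.0    # re-impose the boundary conditions

# Fick's first law at the far wall: how much hydrogen leaks through per
# unit area and time (in normalized units, since C is normalized).
flux = D * (C[-2] - C[-1]) / dx
print(f"outgoing flux ~ {flux:.3e}")
```

Comparing a predicted flux like this against a laboratory permeation measurement, and sending any mismatch back to the atomic-scale simulations, is the loop described in the next paragraphs.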

All this modeling work happening on computers is then compared with experimental results obtained in laboratories.

For example, if one side of the material has hydrogen gas, we want to know how much hydrogen leaks to the other side of the material. If the model and the experimental results match, we can have confidence in the model and use it to predict the behavior of the same material under the conditions we would expect in a fusion device.

If they don’t match, we go back to the atomic-scale simulations to investigate what we missed.

Additionally, we can couple the larger-scale material model to plasma models. These models can tell us which parts of a fusion reactor will be the hottest or have the most particle bombardment. From there, we can evaluate more scenarios.

For instance, if too much hydrogen leaks through the material during the operation of the fusion reactor, we could recommend making the material thicker in certain places, or adding something to trap the hydrogen.

Designing new materials

As the quest for commercial fusion energy continues, scientists will need to engineer more resilient materials. The field of possibilities is daunting – engineers can combine multiple elements in many different ways.

You could combine two elements to create a new material, but how do you know what the right proportion is of each element? And what if you want to try mixing five or more elements together? It would take way too long to try to run our simulations for all of these possibilities.

Thankfully, artificial intelligence is here to assist. By combining experimental and simulation results, analytical AI can recommend combinations that are most likely to have the properties we’re looking for, such as heat and stress resistance.

The aim is to reduce the number of materials that an engineer would have to produce and test experimentally to save time and money.
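A toy version of that screening idea, assuming nothing about the team’s actual tools or data: fit a simple surrogate model to whatever composition-property results already exist (the numbers below are placeholders), then rank untested compositions by predicted performance instead of simulating or synthesizing every one.

```python
import numpy as np

# Placeholder training data: each row gives the fractions of two alloying
# elements in a candidate material; y is some figure of merit (say, a
# combined heat/stress-resistance score). All values are illustrative.
X = np.array([[0.10, 0.00],
              [0.20, 0.05],
              [0.05, 0.10],
              [0.15, 0.15]])
y = np.array([0.62, 0.71, 0.58, 0.75])

# Fit a linear surrogate y ~ w0 + w1*x1 + w2*x2 by least squares.
A = np.hstack([np.ones((len(X), 1)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Score a grid of untested compositions and suggest the top few to test.
grid = np.array([[a, b] for a in np.linspace(0.0, 0.3, 7)
                 for b in np.linspace(0.0, 0.2, 5)])
scores = np.hstack([np.ones((len(grid), 1)), grid]) @ w
best = grid[np.argsort(scores)[::-1][:3]]
print("most promising compositions to test next:\n", best)
```

In practice the surrogate would be a far richer model trained on real experimental and simulation results, but the payoff is the same: fewer materials to produce and test.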

Sophie Blondel, Research Assistant Professor of Nuclear Engineering, University of Tennessee

This article is republished from The Conversation under a Creative Commons license. 


Candidate experience matters in elections, but not the way you think

Previously holding political office is an obvious advantage for candidates seeking votes. SDI Productions/E+/Getty Images
Charlie Hunt, Boise State University

Ever since he was chosen as Donald Trump’s running mate back in July, U.S. Sen. JD Vance, a Republican from Ohio, has come under a level of scrutiny typical for a vice presidential candidate, including for some of his eyebrow-raising public statements made in the past or during the campaign.

One line of critique has persisted through the news cycles: that his lack of political experience may make Vance less qualified than others, including his opponent, Gov. Tim Walz of Minnesota, to be vice president.

Do more politically experienced politicians have advantages in elections? And if they enjoyed such advantages in the past, do they still in such a polarized political moment?

The answers are complicated, but political science offers some clues.

Why experience should matter

Previously holding political office, and for a longer period of time, is in some ways an obvious advantage for candidates making the case to potential voters. If you were applying for a job as an attorney, previous legal experience would be favorably looked upon by an employer. The same is true in elections: If you want to run for office, experience as an officeholder could help you perform better at the job you’re asking for.

This approach has been taken by a number of high-profile politicians over the years. For example, in Hillary Clinton’s first campaign for president in 2008, the U.S. senator from New York and future secretary of state made “strength and experience” the centerpiece of her argument to the voters.

Experience also might matter for the same reasons as incumbency – that is, when a candidate is currently holding the office they are seeking in an election. Incumbents typically have much higher name recognition than their challenger opponents, distinct fundraising advantages and, at least in theory, a record of policy achievement on which to base their campaigns. Even for nonincumbents, these advantages are more prevalent for previous officeholders rather than someone who is a newcomer to politics.

Barack Obama and his family on Nov. 4, 2008, the day he won the presidential election, showing that a lack of political experience can be used as a benefit. Emmanuel Dunand/AFP via Getty Images

Inexperienced, or an ‘outsider’?

But Hillary Clinton was, of course, unsuccessful in her first bid for the Democratic presidential nomination in 2008. She was beaten by a relatively inexperienced candidate named Barack Obama; like Vance, Obama had served less than a full term in the Senate before running for higher office.

Obama’s 2008 win shows that a lack of political experience can be leveraged as a benefit.

One of the few things Obama and Donald Trump have in common is that both benefited from an appeal to voters as a political “outsider” in elections in which Americans were frustrated with the political status quo. As outsiders, they appeared uniquely positioned to fix what voters believed was wrong with politics.

Does experience equal ‘quality’?

The “outsider” label isn’t always a ticket to victory.

In 2020, for example, voters were frustrated with the chaos of having a political outsider in the White House and turned to Joe Biden – possibly the most experienced presidential candidate in modern history at that point, with eight years as vice president and several decades in the Senate under his belt. Voters were hungry for political normalcy in the White House and made that choice for Biden.

Does U.S. Sen. JD Vance’s lack of political experience make him less qualified than his opponent, Gov. Tim Walz of Minnesota, to be vice president? Scott Olson/Getty Images

Political science has other important lessons about when experience matters and when it doesn’t. In Congress, electoral challengers – those running against incumbents – enjoy more of a boost from prior experience in places such as the state legislature. In fact, the typical indicator for challenger “quality” used in political science research is a simple marker of whether the challenger has prior political experience.

But even this finding is more complicated than it seems: Political scientists such as Jeffrey Lazarus have found that high-quality – that is, politically experienced – challengers do better in part because they are more strategic in waiting for better opportunities to run in winnable races.

Experience matters only sometimes – and maybe less than ever

The usefulness of a lengthy political resume also depends on which stage of the election candidates are in.

Research has found, for example, that a candidate’s experience matters much more in settings such as party primaries, where differences between the candidates on policy issues are typically much narrower. That leaves nonpolicy differences such as experience to play a bigger role.

In the general election, voters supportive of one party are unlikely to factor candidate experience in that heavily, even, or especially, when the candidate they support lacks it.

The political science phenomenon known as negative partisanship means that, more and more, voters are motivated not by positive attributes of their own party’s candidates but rather by the fear of losing to the other side. This has only been exacerbated as the two parties have polarized further.

Voters are therefore more willing than ever to lower the standards they might have for their favored candidates’ resumes if it means beating the other side. Even if a Democrat is clearly more qualified than a Republican in terms of political experience, that advantage is unlikely to sway many Republican voters, and vice versa.

What about 2024?

In 2024, the experience factor is complicated. Trump, of course, has been president before – the ultimate prior experience for someone running for exactly that office.

But he has continued to run as an outsider from the political establishment, casting Kamala Harris – who, as vice president, has little actual institutional power – as an incumbent who is responsible for the current state of the country. Since polls show consistently that a majority of Americans believe the country is not headed in the right direction, we can see why Trump might try to frame the race in this way.

Whether Trump’s strategy ends up working will be more apparent after the election is over. For now, Trump and Harris can rest assured that most of their supporters don’t appear to care how much – or how little – experience they have.

Charlie Hunt, Assistant Professor of Political Science, Boise State University

This article is republished from The Conversation under a Creative Commons license. 

Wednesday 28 2024

New forms of steel for stronger, lighter cars

Automakers are tweaking production processes to create a slew of new steels with just the right properties, allowing them to build cars that are both safer and more fuel-efficient

Like many useful innovations, it seems, the creation of high-quality steel by Indian metallurgists more than two thousand years ago may have been a happy confluence of clever workmanship and dumb luck.

Firing chunks of iron with charcoal in a special clay container produced something completely new, which the Indians called wootz. Roman armies were soon wielding wootz steel swords to terrify and subdue the wild, hairy tribes of ancient Europe.

Twenty-four centuries later, automakers are relying on electric arc furnaces, hot stamping machines and quenching and partitioning processes that the ancients could never have imagined. These approaches are yielding new ways to tune steel to protect soft human bodies when vehicles crash into each other, as they inevitably do — while curbing car weights to reduce their deleterious impact on the planet.

“It is a revolution,” says Alan Taub, a University of Michigan engineering professor with many years in the industry. The new steels, dozens of varieties and counting, combined with lightweight polymers and carbon fiber-spun interiors and underbodies, hark back to the heady days at the start of the last century when, he says, “Detroit was Silicon Valley.”

Such materials can reduce the weight of a vehicle by hundreds of pounds — and every pound of excess weight that is shed saves roughly $3 in fuel costs over the lifetime of the car, so the economics are hard to deny. The new maxim, Taub says, is “the right material in the right place.”
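At $3 per pound, the savings scale directly with the weight removed; shedding, say, 300 of those pounds works out to roughly

\[
300\ \text{lb} \times \$3/\text{lb} \approx \$900
\]

in fuel costs over the life of the vehicle.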

The transition to battery-powered vehicles underscores the importance of these new materials. Electric vehicles may not belch pollution, but they are heavy — the Volvo XC40 Recharge, for example, is 33 percent heavier than the gas version (and would be heavier still if the steel surrounding passengers were as bulky as it used to be). Heavy can be dangerous.

“Safety, especially when it comes to new transportation policies and new technologies, cannot be overlooked,” Jennifer Homendy, chief of the National Transportation Safety Board, told the Transportation Research Board in 2023. Plus, reducing the weight of an electric vehicle by 10 percent delivers roughly 14 percent improvement in range.

As recently as the 1960s, the steel cage around passengers was made of what automakers call soft steel. The armor from Detroit’s Jurassic period was not much different from what Henry Ford had introduced decades earlier. It was heavy and there was a lot of it.

With the 1965 publication of Ralph Nader’s Unsafe at Any Speed: The Designed-In Dangers of the American Automobile, big automakers realized they could no longer pursue speed and performance exclusively. The oil embargos of the 1970s only hastened the pace of change: Auto steel now had to be both stronger and lighter, requiring less fuel to push around.

In response, over the past 60 years, like chefs operating a sous vide machine to produce the perfect bite, steelmakers — their cookers arc furnaces reaching thousands of degrees Fahrenheit, with robots doing the cooking — have created a vast variety of steels to match every need. There are high-strength, hardened steels for the chassis; corrosion-resistant stainless steels for side panels and roofs; and highly stretchable metals in bumpers to absorb impacts without crumpling.

Tricks with the steel

Most steel is more than 98 percent iron. It is the other couple of percent — sometimes only hundredths of a single percent, in the case of metals added to confer desired properties — that make the difference. Just as important are treatment methods: the heating, cooling and processing, such as rolling the sheets prior to forming parts. Modifying each, sometimes by only seconds, changes the metal’s structure to yield different properties. “It’s all about playing tricks with the steel,” says John Speer, director of the Advanced Steel Processing and Products Research Center at the Colorado School of Mines.

At the most basic level, the properties of steel are about microstructure: the arrangement of different types, or phases, of steel in the metal. Some phases are harder, while others confer ductility, a measure of how much the metal can be bent and twisted out of shape without shearing and creating jagged edges that penetrate and tear squishy human bodies. At the atomic level, there are principally four phases of auto steel, including the hardest yet most brittle, called martensite, and the more ductile austenite. Carmakers can vary these by manipulating the times and temperatures of the heating process to produce the properties they want.

Academic researchers and steelmakers, working closely with automakers, have developed three generations of what is now called advanced high-strength steel. The first, adopted in the 1990s and still widely employed, had a good combination of strength and ductility. A second generation used more exotic alloys to achieve even greater ductility, but those steels proved expensive and challenging to manufacture.

The third generation, which Speer says is beginning to make its way onto the factory floor, uses heating and cooling techniques to produce steels that are stronger and more formable than the first generation; nearly ten times as strong as common steels of the past; and much cheaper (though less ductile) than second-generation steels.

Steelmakers have learned that cooling time is a critical factor in creating the final arrangements of atoms and therefore the properties of the steel. The most rapid cooling, known as quenching, freezes and stabilizes the internal structure before it undergoes further change during the hours or days it could otherwise take to reach room temperature.

One of the strongest types of modern auto steel — used in the most critical structural components, such as side panels and pillars — is made by superheating the metal with boron and manganese to a temperature above 850 degrees Celsius. After becoming malleable, the steel is transferred within 10 seconds to a die, or form, where the part is shaped and rapidly cooled.

In one version of what is known as transformation-induced plasticity, the steel is heated to a high temperature, cooled to a lower temperature and held there for a time and then rapidly quenched. This produces islands of austenite surrounded by a matrix of softer ferrite, with regions of harder bainite and martensite. This steel can absorb a large amount of energy without fracturing, making it useful in bumpers and pillars.

Recipes can be further tweaked by the use of various alloys. Henry Ford was employing alloys of steel and vanadium more than a century ago to improve the performance of steel in his Model T, and alloy recipes continue to improve today. One modern example of the use of lighter metals in combination with steel is the Ford Motor Company’s aluminum-intensive F-150 truck, the 2015 version weighing nearly 700 pounds less than the previous model.

A process used in conjunction with new materials is tube hydroforming, in which a metal is bent into complex shapes by the high-pressure injection of water or other fluids into a tube, expanding it into the shape of a surrounding die. This allows parts to be made without welding two halves together, saving time and money. A Corvette aluminum frame rail, the largest hydroformed part in the world, saved 20 percent in mass from the steel rail it replaced, according to Taub, who coauthored a 2019 article on automotive lightweighting in the Annual Review of Materials Research.

New alloys

More recent introductions are alloys such as those using titanium and particularly niobium, which increase strength by stabilizing a metal’s microstructure. In a 2022 paper, Speer called the introduction of niobium “one of the most important physical metallurgy developments of the 20th century.”

One tool now shortening the distance between trial and error is the computer. “The idea is to use the computer to develop materials faster than through experimentation,” Speer says. New ideas can now be tested down to the atomic level without workmen bending over a bench or firing up a furnace.

The ever-continuing search for better materials and processes led engineer Raymond Boeman and colleagues to found the Institute for Advanced Composites Manufacturing Innovation (IACMI) in 2015, with a $70 million federal grant. Also known as the Composites Institute, it is a place where industry can develop, test and scale up new processes and products.

“The field is evolving in a lot of ways,” says Boeman, who now directs the institute’s research on upscaling these processes. IACMI has been working on finding more climate-friendly replacements for conventional plastics such as the widely used polypropylene. In 1960, less than 100 pounds of plastic were incorporated into the typical vehicle. By 2017, the figure had risen to nearly 350 pounds, because plastic is cheap to make and has a high strength-to-weight ratio, making it ideal for automakers trying to save on weight.

By 2019, according to Taub, 10-15 percent of a typical vehicle was made of polymers and composites, everything from seat components to trunks, door parts and dashboards. And when those cars reach the end of their lives, their plastic and other difficult-to-recycle materials become automotive shredder residue, 5 million tons of which ends up in landfills or, worse, in the wider environment.

Researchers are working hard to develop stronger, lighter and more environmentally friendly plastics. At the same time, new carbon fiber products are enabling these lightweight materials to be used even in load-bearing places such as structural underbody parts, further reducing the amount of heavier metal used in auto bodies.

Clearly, work remains to make autos less of a threat, both to human bodies and the planet those bodies travel over every day, to work and play. But Taub says he is optimistic about Detroit’s future and the industry’s ability to solve the problems that came with the end of the horse-and-buggy days. “I tell students they will have job security for a long time.”

Knowable Magazine

In a new era of campus upheaval, the 1970 Kent State shootings show the danger of deploying troops to crush legal protests

Ohio National Guard soldiers move in on war protesters at Kent State University on May 4, 1970. AP Photo
Brian VanDeMark, United States Naval Academy

Republican presidential candidate Donald Trump has expressed his intention, if elected to a second term, to use the U.S. armed forces to suppress domestic protests. The New York Times reports that Trump’s allies are marshaling legal arguments to justify using National Guard or active-duty military troops for crowd control.

Moreover, as the Times notes, Trump has asserted that if he returns to the White House, he will dispatch such forces without waiting for state or local officials to request such assistance.

I am a historian who has written several books about the Vietnam War, one of the most divisive episodes in our nation’s past. My new book, “Kent State: An American Tragedy,” examines a historic clash on May 4, 1970, between anti-war protesters and National Guard troops at Kent State University in Ohio.

The confrontation escalated into violence: Troops opened fire on the demonstrators, killing four students and wounding nine others, including one who was paralyzed for life.

In my view, the prospect of dispatching troops in the way that Trump proposes chillingly echoes actions that led up to the Kent State shootings. Some active-duty units, as well as National Guard troops, are trained today to respond to riots and violent protests – but their primary mission is still to fight, kill, and win wars.

Archival footage from CBS News of the clash between campus anti-war protesters and Ohio National Guard troops at Kent State University on May 4, 1970.

Federalizing the Guard

The National Guard is a force of state militias under the command of governors. It can be federalized by the president during times of national emergency or for deployment on combat missions overseas. Guardsmen train for one weekend per month and two weeks every summer.

Typically, the Guard has been deployed to deal with natural disasters and support local police responses to urban unrest, such as riots in Detroit in 1967, Washington in 1968, Los Angeles in 1965 and 1992, and Minneapolis and other cities in 2020 after the death of George Floyd.

The 1807 Insurrection Act grants presidents authority to use active-duty troops or National Guard forces to restore order within the United States. However, presidents rarely deploy Guard troops without state governors’ consent.

The main modern exceptions occurred during the Civil Rights Movement, when Southern governors resisted federal orders to desegregate schools in Arkansas, Mississippi and Alabama. In each case, the troops were sent to protect Black students from crowds of white protesters.

The standoff at Kent State

The war in Vietnam had grown increasingly unpopular by early 1970, and protests intensified on April 30, when President Richard Nixon authorized expanding the conflict into Cambodia. At Kent State, after a noontime anti-war rally on campus on May 1, alcohol-fueled students harassed passing motorists in town and smashed storefront windows that night. On May 2, anti-war protesters set fire to the building where military officers trained Kent State students enrolled in the armed forces’ Reserve Officer Training Corps program.

In response, Republican Gov. Jim Rhodes dispatched National Guard troops, against the advice of university and many local officials, who understood the mood in the town of Kent and on campus far better than Rhodes did. County prosecutor Ron Kane had vehemently warned Rhodes that deploying the National Guard could spark conflict and lead to fatalities.

Nonetheless, Rhodes – who was trailing in an impending Republican primary for a U.S. Senate seat – struck the pose of a take-charge leader who wasn’t going to be pushed around by a long-haired rabble. “We’re going to put a stop to this!” he shouted, pounding the table at a press conference in Kent on May 3.

Hundreds of National Guard troops were deployed across town and on campus. University officials announced that further rallies were banned. Nonetheless, on May 4, some 2,000 to 3,000 students gathered on the campus Commons for another anti-war rally. They were met by 96 National Guardsmen, led by eight officers.

There was an edge of confrontation in the air as student anger over Nixon’s expansion of the war blended with resentment over the Guard’s presence. Protesters chanted antiwar slogans, shouted epithets at the Guardsmen and made obscene gestures.

Doug Guthrie, a student at Kent State in 1970, looks back 54 years later at the events of May 4.

‘Fire in the air!’

The Guardsmen sent to Kent State had no training in de-escalating tension or minimizing the use of force. Nonetheless, their commanding officer that day, Ohio Army National Guard Assistant Adjutant General Robert Canterbury, decided to use them to break up what the Department of Justice later deemed a legal assembly.

In my view, it was a reckless judgment that inflamed an already volatile situation. Students started showering the greatly outnumbered Guardsmen with rocks and other objects. In violation of Ohio Army National Guard regulations, Canterbury neglected to warn the students that the Guardsmen’s rifles were loaded with live ammunition.

As tension mounted, Canterbury failed to adequately supervise his increasingly fearful troops – a cardinal responsibility of the commanding officer on the scene. This fundamental failure of leadership increased confusion and resulted in a breakdown of fire control discipline – officers’ responsibility to maintain tight control over their troops’ discharge of weapons.

When protesters neared the Guardsmen, platoon sergeant Mathew McManus shouted “Fire in the air!” in a desperate attempt to prevent bloodshed. McManus intended for troops to shoot above the students’ heads to warn them off. But some Guardsmen, wearing gas masks that made it hard to hear amid the noise and confusion, only heard or reacted to the first word of McManus’ order, and fired at the students.

The troops had not been trained to fire warning shots, which were in any case contrary to National Guard regulations. And McManus had no authority to issue an order to fire when officers were nearby, as they were.

Many National Guardsmen who were at Kent State on May 4 later questioned why they had been deployed there. “Loaded rifles and fixed bayonets are pretty harsh solutions for students exercising free speech on an American campus,” one of them told an oral history interviewer. Another plaintively asked me in a 2023 interview, “Why would you put soldiers trained to kill on a university campus to serve a police function?”

A fighting force

National Guard equipment and training have improved significantly in the decades since Kent State. But Guardsmen are still troops who are fundamentally trained to fight, not to control crowds. In 2020, then-National Guard Bureau Chief General Joseph Lengyel told reporters that “the civil unrest mission is one of the most difficult and dangerous missions … in our domestic portfolio.”

In my view, the tragedy of Kent State shows how critical it is for authorities to be thoughtful in responding to protests, and extremely cautious in deploying military troops to deal with them. Force is inherently unpredictable, often uncontrollable, and can lead to fatal mistakes and lasting human suffering. And while protests sometimes break rules, they may not be disruptive or harmful enough to merit responding with force.

Aggressive displays of force often heighten tensions and worsen situations. Conversely, research shows that if protesters perceive authorities are behaving with restraint and treating them with respect, they are more likely to remain nonviolent. The shooting at Kent State demonstrates why force should be an absolute last resort in dealing with protests – and one fraught with grave risks.

Brian VanDeMark, Professor of History, United States Naval Academy

This article is republished from The Conversation under a Creative Commons license. 

Wednesday 14 2024

Crispy Grilled Chicken with a Kick

Those first school bells may be ringing, but they don’t have to signal the end of grilling season. This Blackened Spatchcock Chicken keeps the meat moist, tender and tasty with crispy skin and a spicy seasoning to keep your summer spirit alive. Visit Culinary.net to find more recipes that keep your grill lit all year long.

Blackened Spatchcock Chicken

  • 1 whole chicken
  • 1 cup melted butter or ghee
  • 2 tablespoons heat-and-sweet seasoning
  • 1/2 tablespoon garlic powder
  • salt, to taste
  • pepper, to taste
  1. Heat grill to 375-400 F.
  2. Use kitchen shears or knife to remove backbone from chicken to lay flat. Remove rib cage, if desired, or push flat with hands.
  3. Mix butter, heat-and-sweet seasoning and garlic powder. Using meat injector, inject mixture into chicken. Rub remaining buttered seasoning over chicken and season with salt and pepper, to taste.
  4. Place spatchcock chicken breast-side up over indirect heat and cook 35-40 minutes.
  5. When internal temperature reaches 145 F, flip chicken breast-side down over direct heat 5 minutes, or until internal temperature reaches 165 F.
  6. Let rest 10 minutes before serving.

 

SOURCE:
Culinary.net