Saturday 19 2024

To make nuclear fusion a reliable energy source one day, scientists will first need to design heat- and radiation-resilient materials

A fusion experiment ran so hot that the wall materials facing the plasma developed defects. Christophe Roux/CEA IRFM, CC BY
Sophie Blondel, University of Tennessee

Fusion energy has the potential to be an effective clean energy source, as its reactions generate incredibly large amounts of energy. Fusion reactors aim to reproduce on Earth what happens in the core of the Sun, where very light elements merge and release energy in the process. Engineers can harness this energy to heat water and generate electricity through a steam turbine, but the path to fusion isn’t completely straightforward.

Controlled nuclear fusion has several advantages over other power sources for generating electricity. For one, the fusion reaction itself doesn’t produce any carbon dioxide. There is no risk of meltdown, and the reaction doesn’t generate any long-lived radioactive waste.

I’m a nuclear engineer who studies materials that scientists could use in fusion reactors. Fusion takes place at incredibly high temperatures. So to one day make fusion a feasible energy source, reactors will need to be built with materials that can survive the heat and irradiation generated by fusion reactions.

Fusion material challenges

Several combinations of light nuclei can merge in a fusion reaction. The one most scientists prefer is deuterium plus tritium. These two hydrogen isotopes have the highest likelihood of fusing at temperatures that a reactor can maintain. This reaction generates a helium atom and a neutron, which carries most of the energy released by the reaction.
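For reference, that deuterium-tritium reaction and its energy split can be written out explicitly:

\[
{}^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\rightarrow\; {}^{4}\mathrm{He}\ (3.5\ \mathrm{MeV}) + \mathrm{n}\ (14.1\ \mathrm{MeV})
\]

Of the roughly 17.6 MeV released per reaction, about 80% is carried away by the neutron.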

Humans have successfully generated fusion reactions on Earth since 1952 – some even in their garage. But the trick now is to make it worth it. You need to get more energy out of the process than you put in to initiate the reaction.

Fusion reactions happen in a very hot plasma, which is a state of matter similar to gas but made of charged particles. The plasma needs to stay extremely hot – over 100 million degrees Celsius – and condensed for the duration of the reaction.

To keep the plasma hot and condensed and create a reaction that can keep going, you need special materials making up the reactor walls. You also need a cheap and reliable source of fuel.

While deuterium is very common and obtained from water, tritium is very rare. A 1-gigawatt fusion reactor is expected to burn 56 kilograms of tritium annually. But the world has only about 25 kilograms of tritium commercially available.

Researchers need to find alternative sources for tritium before fusion energy can get off the ground. One option is to have each reactor generate its own tritium through a system called a breeding blanket.

The breeding blanket makes up the first layer of the plasma chamber walls and contains lithium that reacts with the neutrons generated in the fusion reaction to produce tritium. The blanket also converts the energy carried by these neutrons to heat.
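In equation form, the tritium-breeding step relies mainly on the lithium-6 isotope in the blanket:

\[
\mathrm{n} + {}^{6}\mathrm{Li} \;\rightarrow\; {}^{3}\mathrm{H} + {}^{4}\mathrm{He} + 4.8\ \mathrm{MeV}
\]

(Reactions with the more abundant lithium-7 can also produce tritium, but only with fast neutrons, and they absorb energy rather than release it.)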

The fusion reaction chamber at ITER, where the plasma will be created and heated.

Fusion devices also need a divertor, which extracts the heat and ash produced in the reaction. The divertor helps keep the reactions going for longer.

These materials will be exposed to unprecedented levels of heat and particle bombardment. And there aren’t currently any experimental facilities to reproduce these conditions and test materials in a real-world scenario. So, the focus of my research is to bridge this gap using models and computer simulations.

From the atom to full device

My colleagues and I work on producing tools that can predict how the materials in a fusion reactor erode, and how their properties change when they are exposed to extreme heat and lots of particle radiation.

As they get irradiated, defects can form and grow in these materials, which affect how well they react to heat and stress. In the future, we hope that government agencies and private companies can use these tools to design fusion power plants.

Our approach, called multiscale modeling, consists of looking at the physics in these materials over different time and length scales with a range of computational models.

We first study the phenomena happening in these materials at the atomic scale through accurate but expensive simulations. For instance, one simulation might examine how hydrogen moves within a material during irradiation.

From these simulations, we look at properties such as diffusivity, which tells us how much the hydrogen can spread throughout the material.

We can integrate the information from these atomic level simulations into less expensive simulations, which look at how the materials react at a larger scale. These larger-scale simulations are less expensive because they model the materials as a continuum instead of considering every single atom.

The atomic-scale simulations can take weeks to run on a supercomputer, while the continuum simulations take only a few hours.
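As a minimal sketch of how an atomistically derived quantity feeds a continuum model, the snippet below solves a one-dimensional hydrogen-diffusion problem through a wall with a simple explicit finite-difference scheme. The diffusivity, wall thickness and boundary conditions are placeholder values for illustration, not numbers from our actual simulations; the output flux is the kind of quantity that gets compared with permeation experiments.

import numpy as np

# Placeholder inputs: in practice the diffusivity D would come from the
# atomic-scale simulations described above (illustrative value only).
D = 1.0e-9          # hydrogen diffusivity in the wall material, m^2/s
L = 1.0e-3          # wall thickness, m
nx = 101            # number of grid points across the wall
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D            # time step within the explicit-scheme stability limit

c = np.zeros(nx)    # normalized hydrogen concentration across the wall
c[0] = 1.0          # plasma-facing side held at a fixed concentration

steps = int(2000.0 / dt)        # simulate 2,000 seconds
for _ in range(steps):
    # Fick's second law, dc/dt = D * d2c/dx2, discretized explicitly
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[-1] = 0.0     # hydrogen removed as it reaches the far side

# Outgoing flux on the far side (Fick's first law), the quantity a
# permeation experiment would measure on the other side of the material
flux_out = D * c[-2] / dx
print(f"Hydrogen flux out of the far side: {flux_out:.3e} (normalized units)")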

All this modeling work happening on computers is then compared with experimental results obtained in laboratories.

For example, if one side of the material has hydrogen gas, we want to know how much hydrogen leaks to the other side of the material. If the model and the experimental results match, we can have confidence in the model and use it to predict the behavior of the same material under the conditions we would expect in a fusion device.

If they don’t match, we go back to the atomic-scale simulations to investigate what we missed.

Additionally, we can couple the larger-scale material model to plasma models. These models can tell us which parts of a fusion reactor will be the hottest or have the most particle bombardment. From there, we can evaluate more scenarios.

For instance, if too much hydrogen leaks through the material during the operation of the fusion reactor, we could recommend making the material thicker in certain places, or adding something to trap the hydrogen.

Designing new materials

As the quest for commercial fusion energy continues, scientists will need to engineer more resilient materials. The field of possibilities is daunting: engineers can combine many different elements, in many different proportions, to make candidate materials.

You could combine two elements to create a new material, but how do you know what the right proportion is of each element? And what if you want to try mixing five or more elements together? It would take way too long to try to run our simulations for all of these possibilities.

Thankfully, artificial intelligence is here to assist. By combining experimental and simulation results, analytical AI can recommend combinations that are most likely to have the properties we’re looking for, such as heat and stress resistance.

The aim is to reduce the number of materials that an engineer would have to produce and test experimentally to save time and money.
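As a toy sketch of that kind of screening (with entirely made-up data, and a random forest standing in for whatever surrogate model a real team would choose), the snippet below trains a model on hypothetical composition-versus-property measurements and then ranks a large pool of candidate five-element mixtures by predicted performance.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: each row is the fractional composition of a
# five-element mixture, and the target is a measured or simulated property
# (say, a heat-resistance score). Real inputs would come from experiments
# and simulations; these are random stand-ins.
X_train = rng.dirichlet(np.ones(5), size=200)
true_weights = np.array([1.0, 0.2, 0.8, 0.1, 0.5])      # invented "ground truth"
y_train = X_train @ true_weights + rng.normal(scale=0.05, size=200)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)

# Screen a large pool of candidate compositions and keep the most promising few,
# which would then go on to detailed simulation or laboratory testing.
candidates = rng.dirichlet(np.ones(5), size=10_000)
predicted = surrogate.predict(candidates)
best = np.argsort(predicted)[::-1][:5]
for i in best:
    print(np.round(candidates[i], 3), f"predicted score: {predicted[i]:.3f}")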

Sophie Blondel, Research Assistant Professor of Nuclear Engineering, University of Tennessee

This article is republished from The Conversation under a Creative Commons license. 


Wednesday 28 2024

New forms of steel for stronger, lighter cars

Automakers are tweaking production processes to create a slew of new steels with just the right properties, allowing them to build cars that are both safer and more fuel-efficient

Like many useful innovations, it seems, the creation of high-quality steel by Indian metallurgists more than two thousand years ago may have been a happy confluence of clever workmanship and dumb luck.

Firing chunks of iron with charcoal in a special clay container produced something completely new, which the Indians called wootz. Roman armies were soon wielding wootz steel swords to terrify and subdue the wild, hairy tribes of ancient Europe.

Twenty-four centuries later, automakers are relying on electric arc furnaces, hot stamping machines and quenching and partitioning processes that the ancients could never have imagined. These approaches are yielding new ways to tune steel to protect soft human bodies when vehicles crash into each other, as they inevitably do — while curbing car weights to reduce their deleterious impact on the planet.

“It is a revolution,” says Alan Taub, a University of Michigan engineering professor with many years in the industry. The new steels, dozens of varieties and counting, combined with lightweight polymers and carbon fiber-spun interiors and underbodies, hark back to the heady days at the start of the last century when, he says, “Detroit was Silicon Valley.”

Such materials can reduce the weight of a vehicle by hundreds of pounds — and every pound of excess weight that is shed saves roughly $3 in fuel costs over the lifetime of the car, so the economics are hard to deny. The new maxim, Taub says, is “the right material in the right place.”
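To put that rule of thumb in concrete numbers, here is the arithmetic for a few illustrative weight reductions, using the $3-per-pound figure cited above (the poundages are rounded examples, not manufacturer figures).

# Rough lifetime fuel savings from shedding weight, using the ~$3-per-pound figure cited above.
savings_per_pound = 3   # dollars saved over the lifetime of the car

for pounds_saved in (100, 300, 700):     # 700 lb is roughly the F-150's aluminum redesign, mentioned later
    print(f"{pounds_saved} lb lighter: about ${pounds_saved * savings_per_pound:,} in lifetime fuel savings")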

The transition to battery-powered vehicles underscores the importance of these new materials. Electric vehicles may not belch pollution, but they are heavy — the Volvo XC40 Recharge, for example, is 33 percent heavier than the gas version (and would be heavier still if the steel surrounding passengers were as bulky as it used to be). Heavy can be dangerous.

“Safety, especially when it comes to new transportation policies and new technologies, cannot be overlooked,” Jennifer Homendy, chief of the National Transportation Safety Board, told the Transportation Research Board in 2023. Plus, reducing the weight of an electric vehicle by 10 percent delivers roughly 14 percent improvement in range.

As recently as the 1960s, the steel cage around passengers was made of what automakers call soft steel. The armor from Detroit’s Jurassic period was not much different from what Henry Ford had introduced decades earlier. It was heavy and there was a lot of it.

With the 1965 publication of Ralph Nader’s Unsafe at Any Speed: The Designed-In Dangers of the American Automobile, big automakers realized they could no longer pursue speed and performance exclusively. The oil embargos of the 1970s only hastened the pace of change: Auto steel now had to be both stronger and lighter, requiring less fuel to push around.

In response, over the past 60 years, like chefs operating a sous vide machine to produce the perfect bite, steelmakers — their cookers arc furnaces reaching thousands of degrees Fahrenheit, with robots doing the cooking — have created a vast variety of steels to match every need. There are high-strength, hardened steels for the chassis; corrosion-resistant stainless steels for side panels and roofs; and highly stretchable metals in bumpers to absorb impacts without crumpling.

Tricks with the steel

Most steel is more than 98 percent iron. It is the other couple of percent — sometimes only hundredths of a single percent, in the case of metals added to confer desired properties — that make the difference. Just as important are treatment methods: the heating, cooling and processing, such as rolling the sheets prior to forming parts. Modifying each, sometimes by only seconds, changes the metal’s structure to yield different properties. “It’s all about playing tricks with the steel,” says John Speer, director of the Advanced Steel Processing and Products Research Center at the Colorado School of Mines.

At the most basic level, the properties of steel are about microstructure: the arrangement of different types, or phases, of steel in the metal. Some phases are harder, while others confer ductility, a measure of how much the metal can be bent and twisted out of shape without shearing and creating jagged edges that penetrate and tear squishy human bodies. At the atomic level, there are principally four phases of auto steel, including the hardest yet most brittle, called martensite, and the more ductile austenite. Carmakers can vary these by manipulating the times and temperatures of the heating process to produce the properties they want.

Academic researchers and steelmakers, working closely with automakers, have developed three generations of what is now called advanced high-strength steel. The first, adopted in the 1990s and still widely employed, had a good combination of strength and ductility. A second generation used more exotic alloys to achieve even greater ductility, but those steels proved expensive and challenging to manufacture.

The third generation, which Speer says is beginning to make its way onto the factory floor, uses heating and cooling techniques to produce steels that are stronger and more formable than the first generation; nearly ten times as strong as common steels of the past; and much cheaper (though less ductile) than second-generation steels.

Steelmakers have learned that cooling time is a critical factor in creating the final arrangements of atoms and therefore the properties of the steel. The most rapid cooling, known as quenching, freezes and stabilizes the internal structure before it undergoes further change during the hours or days it could otherwise take to reach room temperature.

One of the strongest types of modern auto steel — used in the most critical structural components, such as side panels and pillars — is made by superheating the metal with boron and manganese to a temperature above 850 degrees Celsius. After becoming malleable, the steel is transferred within 10 seconds to a die, or form, where the part is shaped and rapidly cooled.

In one version of what is known as transformation-induced plasticity, the steel is heated to a high temperature, cooled to a lower temperature and held there for a time and then rapidly quenched. This produces islands of austenite surrounded by a matrix of softer ferrite, with regions of harder bainite and martensite. This steel can absorb a large amount of energy without fracturing, making it useful in bumpers and pillars.

Recipes can be further tweaked by the use of various alloys. Henry Ford was employing alloys of steel and vanadium more than a century ago to improve the performance of steel in his Model T, and alloy recipes continue to improve today. One modern example of the use of lighter metals in combination with steel is the Ford Motor Company’s aluminum-intensive F-150 truck, the 2015 version weighing nearly 700 pounds less than the previous model.

A process used in conjunction with new materials is tube hydroforming, in which a metal is bent into complex shapes by the high-pressure injection of water or other fluids into a tube, expanding it into the shape of a surrounding die. This allows parts to be made without welding two halves together, saving time and money. A Corvette aluminum frame rail, the largest hydroformed part in the world, saved 20 percent in mass from the steel rail it replaced, according to Taub, who coauthored a 2019 article on automotive lightweighting in the Annual Review of Materials Research.

New alloys

More recent introductions are alloys such as those using titanium and particularly niobium, which increase strength by stabilizing a metal’s microstructure. In a 2022 paper, Speer called the introduction of niobium “one of the most important physical metallurgy developments of the 20th century.”

One tool now shortening the distance between trial and error is the computer. “The idea is to use the computer to develop materials faster than through experimentation,” Speer says. New ideas can now be tested down to the atomic level without workmen bending over a bench or firing up a furnace.

The ever-continuing search for better materials and processes led engineer Raymond Boeman and colleagues to found the Institute for Advanced Composites Manufacturing Innovation (IACMI) in 2015, with a $70 million federal grant. Also known as the Composites Institute, it is a place where industry can develop, test and scale up new processes and products.

“The field is evolving in a lot of ways,” says Boeman, who now directs the institute’s research on upscaling these processes. IACMI has been working on finding more climate-friendly replacements for conventional plastics such as the widely used polypropylene. In 1960, less than 100 pounds of plastic were incorporated into the typical vehicle. By 2017, the figure had risen to nearly 350 pounds, because plastic is cheap to make and has a high strength-to-weight ratio, making it ideal for automakers trying to save on weight.

By 2019, according to Taub, 10-15 percent of a typical vehicle was made of polymers and composites, everything from seat components to trunks, door parts and dashboards. And when those cars reach the end of their lives, their plastic and other difficult-to-recycle materials, known as automotive shredder residue, end up in landfills – 5 million tons of it – or, worse, in the wider environment.

Researchers are working hard to develop stronger, lighter and more environmentally friendly plastics. At the same time, new carbon fiber products are enabling these lightweight materials to be used even in load-bearing places such as structural underbody parts, further reducing the amount of heavy metal used in auto bodies.

Clearly, work remains to make autos less of a threat, both to human bodies and the planet those bodies travel over every day, to work and play. But Taub says he is optimistic about Detroit’s future and the industry’s ability to solve the problems that came with the end of the horse-and-buggy days. “I tell students they will have job security for a long time.”

Knowable Magazine

Sunday 02 2024

I’m an astrophysicist mapping the universe with data from the Chandra X-ray Observatory − clear, sharp photos help me study energetic black holes

NASA’s Chandra X-ray Observatory detects X-ray emissions from astronomical events. NASA/CXC & J. Vaughan
Giuseppina Fabbiano, Smithsonian Institution

When a star is born or dies, or when any other very energetic phenomenon occurs in the universe, it emits X-rays, which are high-energy light particles that aren’t visible to the naked eye. These X-rays are the same kind that doctors use to take pictures of broken bones inside the body. But instead of looking at the shadows produced by the bones stopping X-rays inside of a person, astronomers detect X-rays flying through space to get images of events such as black holes and supernovae.

Images and spectra – charts showing the distribution of light across different wavelengths from an object – are the two main ways astronomers investigate the universe. Images tell them what things look like and where certain phenomena are happening, while spectra tell them how much energy the photons, or light particles, they are collecting have. Spectra can also clue them in to how the event that produced the light unfolded. When studying complex objects, astronomers need both imaging and spectra.

Scientists and engineers designed the Chandra X-ray Observatory to detect these X-rays. Since 1999, Chandra’s data has given astronomers incredibly detailed images of some of the universe’s most dramatic events.

The Chandra spacecraft and its components. NASA/CXC/SAO & J.Vaughan

Stars forming and dying create supernova explosions that send chemical elements out into space. Chandra watches as gas and stars fall into the deep gravitational pulls of black holes, and it bears witness as gas that’s a thousand times hotter than the Sun escapes galaxies in explosive winds. It can see when the gravity of huge masses of dark matter traps that hot gas in gigantic pockets.

On the left is the Cassiopeia A supernova. The image is about 19 light years across, and different colors in the image identify different chemical elements (red indicates silicon, yellow indicates sulfur, cyan indicates calcium, purple indicates iron and blue indicates high energy). The point at the center could be the neutron star remnant of the exploded star. On the right are the colliding ‘Antennae’ galaxies, which form a gigantic structure about 30,000 light years across. Chandra X-ray Center

NASA designed Chandra to orbit around the Earth because it would not be able to see any of this activity from Earth’s surface. Earth’s atmosphere absorbs X-rays coming from space, which is great for life on Earth because these X-rays can harm biological organisms. But it also means that even if NASA placed Chandra on the highest mountaintop, it still wouldn’t be able to detect any X-rays. NASA needed to send Chandra into space.

I am an astrophysicist at the Smithsonian Astrophysical Observatory, part of the Center for Astrophysics | Harvard and Smithsonian. I’ve been working on Chandra since before it launched 25 years ago, and it’s been a pleasure to see what the observatory can teach astronomers about the universe.

Supermassive black holes and their host galaxies

Astronomers have found supermassive black holes, which have masses millions to billions of times that of our Sun, in the centers of all galaxies. These supermassive black holes are mostly sitting there peacefully, and astronomers can detect them by looking at the gravitational pull they exert on nearby stars.

But sometimes, stars or clouds fall into these black holes, which activates them and makes the region close to the black hole emit lots of X-rays. Once activated, they are called active galactic nuclei, AGN, or quasars.

My colleagues and I wanted to better understand what happens to the host galaxy once its black hole turns into an AGN. We picked one galaxy, ESO 428-G014, to look at with Chandra.

An AGN can outshine its host galaxy, which means that more light comes from the AGN than all the stars and other objects in the host galaxy. The AGN also deposits a lot of energy within the confines of its host galaxy. This effect, which astronomers call feedback, is an important ingredient for researchers who are building simulations that model how the universe evolves over time. But we still don’t quite know how much of a role the energy from an AGN plays in the formation of stars in its host galaxy.

Luckily, images from Chandra can provide important insight. I use computational techniques to build and process images from the observatory that can tell me about these AGNs.

Getting the ultimate Chandra resolution. From left to right, you see the raw image, the same image at a higher resolution and the image after applying a smoothing algorithm. G. Fabbiano

The active supermassive black hole in ESO 428-G014 produces X-rays that illuminate a large area, extending as far as 15,000 light years away from the black hole. The basic image that I generated of ESO 428-G014 with Chandra data tells me that the region near the center is the brightest, and that there is a large, elongated region of X-ray emission.

The same data, at a slightly higher resolution, shows two distinct regions with high X-ray emissions. There’s a “head,” which encompasses the center, and a slightly curved “tail,” extending down from this central region.

I can also process the data with an adaptive smoothing algorithm that brings the image into an even higher resolution and creates a clearer picture of what the galaxy looks like. This shows clouds of gas around the bright center.
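As a rough illustration of the idea behind adaptive smoothing (not the actual algorithm in the Chandra analysis software), the sketch below blurs faint, low-count regions of an X-ray image more aggressively than bright ones, so noise is suppressed without washing out the bright center.

import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_smooth(counts, sigmas=(0.5, 1.0, 2.0, 4.0), target=25):
    """Crude adaptive smoothing: for every pixel, use the narrowest Gaussian
    kernel that already gathers roughly `target` counts, so bright regions
    keep their detail while faint regions are smoothed more heavily."""
    smoothed = [gaussian_filter(counts.astype(float), s) for s in sigmas]
    # Approximate counts gathered under each kernel: local mean times kernel area
    gathered = [img * 2.0 * np.pi * s**2 for img, s in zip(smoothed, sigmas)]
    result = smoothed[-1].copy()                 # default: the widest kernel
    chosen = np.zeros(counts.shape, dtype=bool)
    for img, gth in zip(smoothed, gathered):
        pick = ~chosen & (gth >= target)
        result[pick] = img[pick]
        chosen |= pick
    return result

# Example: a faint, noisy field with a single bright point source at the center
rng = np.random.default_rng(1)
image = rng.poisson(0.2, size=(128, 128)).astype(float)
image[64, 64] += 200.0
print(f"Peak after smoothing: {adaptive_smooth(image).max():.1f} counts")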

My team has been able to see some of the ways the AGN interacts with the galaxy. The images show nuclear winds sweeping the galaxy, dense clouds and interstellar gas reflecting X-ray light, and jets shooting out radio waves that heat up clouds in the galaxy.

These images are teaching us how this feedback process operates in detail and how to measure how much energy an AGN deposits. These results will help researchers produce more realistic simulations of how the universe evolves.

The next 25 years of X-ray astronomy

The year 2024 marks the 25th year since Chandra started making observations of the sky. My colleagues and I continue to depend on Chandra to answer questions about the origin of the universe that no other telescope can.

By providing X-ray data, Chandra supplements information from the Hubble Space Telescope and the James Webb Space Telescope, giving astronomers unique answers to open questions in astrophysics, such as where the supermassive black holes found at the centers of all galaxies came from.

For this particular question, astronomers used Chandra to observe a faraway galaxy first spotted by the James Webb Space Telescope. The light captured by Webb left this galaxy 13.4 billion years ago, when the universe was young. Chandra’s X-ray data revealed a bright supermassive black hole in this galaxy and suggested that supermassive black holes may form from the collapse of massive gas clouds in the early universe.

Sharp imaging has been crucial for these discoveries. But Chandra is expected to last only another 10 years. To keep the search for answers going, astronomers will need to start designing a “super Chandra” X-ray observatory that could succeed Chandra in future decades, though NASA has not yet announced any firm plans to do so.

Giuseppina Fabbiano, Senior Astrophysicist, Smithsonian Institution

This article is republished from The Conversation under a Creative Commons license.


Monday 01 2024

What is Volt Typhoon? A cybersecurity expert explains the Chinese hackers targeting US critical infrastructure

U.S.-China antagonism is particularly acute in the realm of hacking and cybersecurity. AP Photo/Kiichiro Sato
Richard Forno, University of Maryland, Baltimore County

Volt Typhoon is a Chinese state-sponsored hacker group. The United States government and its primary global intelligence partners, known as the Five Eyes, issued a warning on March 19, 2024, about the group’s activity targeting critical infrastructure.

The warning echoes analyses by the cybersecurity community about Chinese state-sponsored hacking in recent years. As with many cyberattacks and attackers, Volt Typhoon has many aliases and also is known as Vanguard Panda, Bronze Silhouette, Dev-0391, UNC3236, Voltzite and Insidious Taurus. Following these latest warnings, China again denied that it engages in offensive cyberespionage.

Volt Typhoon has compromised thousands of devices around the world since it was publicly identified by security analysts at Microsoft in May 2023. However, some analysts in both the government and cybersecurity community believe the group has been targeting infrastructure since mid-2021, and possibly much longer.

Volt Typhoon uses malicious software that penetrates internet-connected systems by exploiting vulnerabilities such as weak administrator passwords, factory default logins and devices that haven’t been updated regularly. The hackers have targeted communications, energy, transportation, water and wastewater systems in the U.S. and its territories, such as Guam.

In many ways, Volt Typhoon functions similarly to traditional botnet operators that have plagued the internet for decades. It takes control of vulnerable internet devices such as routers and security cameras to hide and establish a beachhead in advance of using that system to launch future attacks.

Operating this way makes it difficult for cybersecurity defenders to accurately identify the source of an attack. Worse, defenders could accidentally retaliate against a third party who is unaware that they are caught up in Volt Typhoon’s botnet.

Why Volt Typhoon matters

Disrupting critical infrastructure has the potential to cause economic harm around the world. Volt Typhoon’s operation also poses a threat to the U.S. military by potentially disrupting power and water to military facilities and critical supply chains.

FBI Director Christopher Wray testified at a congressional hearing on Jan. 31, 2024, about Chinese hackers targeting U.S. critical infrastructure.

Microsoft’s 2023 report noted that Volt Typhoon could “disrupt critical communications infrastructure between the United States and Asia region during future crises.” The March 2024 report, published in the U.S. by the Cybersecurity and Infrastructure Security Agency, likewise warned that the botnet could lead to “disruption or destruction of critical services in the event of increased geopolitical tensions and/or military conflict with the United States and its allies.”

Volt Typhoon’s existence and the escalating tensions between China and the U.S., particularly over Taiwan, underscore the latest connection between global events and cybersecurity.

Defending against Volt Typhoon

The FBI reported on Jan. 31, 2024, that it had disrupted Volt Typhoon’s operations by removing the group’s malware from hundreds of small office/home office routers. However, the U.S. is still determining the extent of the group’s infiltration of America’s critical infrastructure.

On March 25, 2024, the U.S. and U.K. announced that they had imposed sanctions on Chinese hackers involved in compromising their infrastructures. And other countries, including New Zealand, have revealed cyberattacks traced back to China in recent years.

All organizations, especially infrastructure providers, must practice time-tested safe computing centered on preparation, detection and response. They must ensure that their information systems and smart devices are properly configured and patched, and that they can log activity. And they should identify and replace any devices at the edges of their networks, such as routers and firewalls, that are no longer supported by their vendor.

Organizations can also implement strong user-authentication measures such as multifactor authentication to make it more difficult for attackers like Volt Typhoon to compromise systems and devices. More broadly, the comprehensive NIST Cybersecurity Framework can help these organizations develop stronger cybersecurity postures to defend against Volt Typhoon and other attackers.
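As a small, hypothetical illustration of that kind of hygiene check (the inventory format, device names and fields here are invented, not drawn from any particular product or standard), the sketch below flags edge devices that are past vendor support or whose administrative access lacks multifactor authentication.

from datetime import date

# Hypothetical asset inventory; in practice this would come from an asset-management system.
inventory = [
    {"name": "branch-router-01", "role": "edge", "end_of_support": date(2022, 6, 30), "admin_mfa": False},
    {"name": "hq-firewall",      "role": "edge", "end_of_support": date(2027, 1, 15), "admin_mfa": True},
    {"name": "lobby-camera",     "role": "iot",  "end_of_support": date(2021, 3, 1),  "admin_mfa": False},
]

today = date.today()
for device in inventory:
    findings = []
    if device["end_of_support"] < today:
        findings.append("past vendor end-of-support; replace")
    if not device["admin_mfa"]:
        findings.append("administrative access lacks multifactor authentication")
    if findings:
        print(f"{device['name']}: " + "; ".join(findings))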

Individuals, too, can take steps to protect themselves and their employers by ensuring their devices are properly updated, enabling multifactor authentication, never reusing passwords, and otherwise remaining vigilant to suspicious activity on their accounts, devices and networks.

For cybersecurity practitioners and society generally, campaigns like Volt Typhoon’s represent an enormous geopolitical cybersecurity threat. They are a reminder for everyone to monitor what’s going on in the world and consider how current events can affect the confidentiality, integrity and availability of all things digital.

Richard Forno, Principal Lecturer in Computer Science and Electrical Engineering, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. 

Saturday 02 2024

We’ve been here before: AI promised humanlike machines – in 1958

Frank Rosenblatt with the Mark I Perceptron, the first artificial neural network computer, unveiled in 1958. National Museum of the U.S. Navy/Flickr
Danielle Williams, Arts & Sciences at Washington University in St. Louis

A roomsize computer equipped with a new type of circuitry, the Perceptron, was introduced to the world in 1958 in a brief news story buried deep in The New York Times. The story cited the U.S. Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

More than six decades later, similar claims are being made about current artificial intelligence. So, what’s changed in the intervening years? In some ways, not much.

The field of artificial intelligence has been running through a boom-and-bust cycle since its early days. Now, as the field is in yet another boom, many proponents of the technology seem to have forgotten the failures of the past – and the reasons for them. While optimism drives progress, it’s worth paying attention to the history.

The Perceptron, invented by Frank Rosenblatt, arguably laid the foundations for AI. The electronic analog computer was a learning machine designed to predict whether an image belonged in one of two categories. This revolutionary machine was filled with wires that physically connected different components together. Modern day artificial neural networks that underpin familiar AI like ChatGPT and DALL-E are software versions of the Perceptron, except with substantially more layers, nodes and connections.

Much like modern-day machine learning systems, if the Perceptron returned the wrong answer, it would alter its connections so that it could make a better prediction the next time around. Familiar modern AI systems work in much the same way. Using a prediction-based format, large language models, or LLMs, are able to produce impressive long-form text-based responses and associate images with text to produce new images based on prompts. These systems get better and better as they interact more with users.
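Here is a minimal sketch of that error-driven learning rule in modern code – a single software perceptron, not Rosenblatt’s room-size hardware – which nudges its connection weights whenever it classifies a training point incorrectly.

import numpy as np

rng = np.random.default_rng(0)

# Toy two-category problem: points above or below the line y = x
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = (X[:, 1] > X[:, 0]).astype(int)

weights = np.zeros(2)   # the "connections"
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    mistakes = 0
    for xi, target in zip(X, y):
        prediction = int(np.dot(weights, xi) + bias > 0)
        if prediction != target:
            # Wrong answer: nudge the connections toward the correct category
            update = learning_rate * (target - prediction)
            weights += update * xi
            bias += update
            mistakes += 1
    if mistakes == 0:       # every training point classified correctly
        break

print("learned weights:", weights, "bias:", bias)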

A timeline of the history of AI starting in the 1940s. Danielle J. Williams, CC BY-ND

AI boom and bust

In the decade or so after Rosenblatt unveiled the Mark I Perceptron, experts like Marvin Minsky claimed that the world would “have a machine with the general intelligence of an average human being” by the mid- to late-1970s. But despite some success, humanlike intelligence was nowhere to be found.

It quickly became apparent that the AI systems knew nothing about their subject matter. Without the appropriate background and contextual knowledge, it’s nearly impossible to accurately resolve ambiguities present in everyday language – a task humans perform effortlessly. The first AI “winter,” or period of disillusionment, hit in 1974 following the perceived failure of the Perceptron.

However, by 1980, AI was back in business, and the first official AI boom was in full swing. There were new expert systems, AIs designed to solve problems in specific areas of knowledge, that could identify objects and diagnose diseases from observable data. There were programs that could make complex inferences from simple stories, the first driverless car was ready to hit the road, and robots that could read and play music were playing for live audiences.

But it wasn’t long before the same problems stifled excitement once again. In 1987, the second AI winter hit. Expert systems were failing because they couldn’t handle novel information.

The 1990s changed the way experts approached problems in AI. Although the eventual thaw of the second winter didn’t lead to an official boom, AI underwent substantial changes. Researchers were tackling the problem of knowledge acquisition with data-driven approaches to machine learning that changed how AI acquired knowledge.

This time also marked a return to the neural-network-style perceptron, but this version was far more complex, dynamic and, most importantly, digital. The return to the neural network, along with the invention of the web browser and an increase in computing power, made it easier to collect images, mine for data and distribute datasets for machine learning tasks.

Familiar refrains

Fast forward to today and confidence in AI progress has begun once again to echo promises made nearly 60 years ago. The term “artificial general intelligence” is used to describe the activities of LLMs like those powering AI chatbots like ChatGPT. Artificial general intelligence, or AGI, describes a machine that has intelligence equal to humans, meaning the machine would be self-aware, able to solve problems, learn, plan for the future and possibly be conscious.

Just as Rosenblatt thought his Perceptron was a foundation for a conscious, humanlike machine, so do some contemporary AI theorists about today’s artificial neural networks. In 2023, Microsoft published a paper saying that “GPT-4’s performance is strikingly close to human-level performance.”

Executives at big tech companies, including Meta, Google and OpenAI, have set their sights on developing human-level AI. AP Photo/Eric Risberg

But before claiming that LLMs are exhibiting human-level intelligence, it might help to reflect on the cyclical nature of AI progress. Many of the same problems that haunted earlier iterations of AI are still present today. The difference is how those problems manifest.

For example, the knowledge problem persists to this day. ChatGPT continually struggles to respond to idioms, metaphors, rhetorical questions and sarcasm – unique forms of language that go beyond grammatical connections and instead require inferring the meaning of the words based on context.

Artificial neural networks can, with impressive accuracy, pick out objects in complex scenes. But give an AI a picture of a school bus lying on its side and it will very confidently say it’s a snowplow 97% of the time.

Lessons to heed

In fact, it turns out that AI is quite easy to fool in ways that humans would immediately identify. I think it’s a consideration worth taking seriously in light of how things have gone in the past.

The AI of today looks quite different than AI once did, but the problems of the past remain. As the saying goes: History may not repeat itself, but it often rhymes.

Danielle Williams, Postdoctoral Fellow in Philosophy of Science, Arts & Sciences at Washington University in St. Louis

This article is republished from The Conversation under a Creative Commons license. 

Tuesday 27 2024

Why federal efforts to protect schools from cybersecurity threats fall short

The cost of safeguarding America’s schools from cybercriminals could run as high as $5 billion. boonchai wedmakawand via Getty Images
Nir Kshetri, University of North Carolina – Greensboro

In August 2023, the White House announced a plan to bolster cybersecurity in K-12 schools – and with good reason. Between 2018 and mid-September 2023, there were 386 recorded cyberattacks in the U.S. education sector, costing those schools $35.1 billion. K-12 schools were the primary target.

The new White House initiative includes a collaboration with federal agencies that have cybersecurity expertise, such as the Cybersecurity and Infrastructure Security Agency, the Federal Communications Commission and the FBI. Technology firms like Amazon, Google, Cloudflare, PowerSchool and D2L have pledged to support the initiative with training and resources.

While the steps taken by the White House are positive, as someone who teaches and conducts research about cybersecurity, I don’t believe the proposed measures are enough to protect schools from cyberthreats. Here are four reasons why:

1. Schools face more cyberthreats than other sectors

Cyberattacks on K-12 schools increased more than eightfold in 2022. Educational institutions draw the interest of cybercriminals because their weak cybersecurity gives attackers an easy path into networks containing highly sensitive information.

Criminals can exploit students’ information to apply for fraudulent government benefits and open unauthorized bank accounts and credit cards. In testimony to the House Ways and Means Subcommittee on Social Security, a Federal Trade Commission official noted that children’s Social Security numbers are uniquely valuable because they have no credit history and can be paired with any name and date of birth. Over 10% of children enrolled in an identity protection service were discovered to have loans.

Cybercriminals can also use such information to launch ransomware attacks against schools. Ransomware attacks involve locking up a computer or its files and demanding payment for their release. The ransomware victimization rate in the education sector surpasses that of all other surveyed industries, including health care, technology, financial services and manufacturing.

Schools are especially vulnerable to cyberthreats because more and more schools are lending electronic devices to students. Criminals have been found to hide malware within online textbooks and essays to dupe students into downloading it. Should students or teachers inadvertently download malware onto school-owned devices, criminals can launch an attack on the entire school network.

When faced with such an attack, schools can be desperate to comply with criminals’ demands to ensure students’ access to learning.

2. Schools lack cybersecurity personnel

K-12 schools’ poor cybersecurity performance can be attributed, in part, to lack of staff. About two-thirds of school districts lack a full-time cybersecurity position. Those with cybersecurity staff often don’t have the budget for a chief information security officer to oversee and manage the district’s strategy. Often, the IT director takes on this role, but they have a broader responsibility for IT operations without a specific emphasis on security.

3. Schools lack cybersecurity skills

The lack of cybersecurity skills among existing staff hinders the development of strong cybersecurity programs.

Only 10% of educators say that they have a deep understanding of cybersecurity. The majority of students say that they have minimal or no knowledge about cybersecurity. Cybersecurity awareness tends to be even lower in higher-poverty districts, where students have less access to cybersecurity education.

The Cybersecurity and Infrastructure Security Agency plans to provide cybersecurity training to an additional 300 K-12 schools, school districts and other organizations involved in K-12 education in the forthcoming school year. With 130,930 K-12 public schools and 13,187 public school districts in the U.S., CISA’s plan serves only a tiny fraction of them.

4. Inadequate funding

The FCC has proposed a pilot program that would allocate $200 million over three years to boost cyberdefenses. With an annual budget of $66.6 million, this falls short of covering the entirety of cybersecurity costs, given that it will cost an estimated $5 billion to adequately secure the nation’s K-12 schools.

The costs encompass hardware and software procurement, consulting, testing, and hiring data protection experts to combat cyberattacks. Frequent training is also needed to respond to evolving threats. As technology advances, cybercriminals adapt their methods to exploit vulnerabilities in digital systems. Teachers must be ready to address such risks.

Costs are sizable

How much should schools and districts be spending on cybersecurity? Other sectors can serve as a model to guide K-12 schools.

One way to determine cybersecurity funding is by the number of employees. In the financial services industry, for example, these costs range from $1,300 to $3,000 per full-time employee. There are over 4 million teachers in the United States. Setting cybersecurity spending at $1,300 per teacher – the low end of what financial firms spend – would require K-12 schools to spend a total of $5 billion.

An alternate approach is to determine cybersecurity funding relative to IT spending. On average, U.S. enterprises are estimated to spend 10% of their IT budgets on cybersecurity. Since K-12 schools were estimated to spend more than $50 billion on IT in the 2020-21 fiscal year, allocating 10% to cybersecurity would also require them to spend $5 billion.

Another approach is to allocate cybersecurity spending as a proportion of the total budget. In 2019, cybersecurity spending represented 0.3% of the federal budget. Federal, state and local governments collectively allocate $810 billion for K-12 education. If schools set cybersecurity spending at 0.3%, following the example of federal agencies, that would require an annual budget of $2.4 billion.
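The three rules of thumb above come down to simple arithmetic. The short script below reproduces the estimates using the figures cited in this article.

# Reproducing the three budget estimates above from the cited figures.

teachers = 4_000_000                 # roughly 4 million U.S. teachers
per_employee_low = 1_300             # low end of financial-sector spending, dollars per employee
print(f"Per-employee rule:     ${teachers * per_employee_low / 1e9:.1f} billion per year")

it_spending = 50e9                   # K-12 IT spending, 2020-21 fiscal year, dollars
print(f"10% of IT budgets:     ${0.10 * it_spending / 1e9:.1f} billion per year")

total_k12_budget = 810e9             # combined federal, state and local K-12 funding, dollars
print(f"0.3% of total budget:  ${0.003 * total_k12_budget / 1e9:.1f} billion per year")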

By contrast, a fifth of schools dedicate less than 1% of their IT budgets – not their entire budgets – to cybersecurity. In 12% of school districts, there is no allocation for cybersecurity at all.

Nir Kshetri, Professor of Management, University of North Carolina – Greensboro

This article is republished from The Conversation under a Creative Commons license. 

Saturday 03 2024

Animal CSI: Forensics comes for the wildlife trade


Scientists are using the latest in DNA fingerprinting to combat the multibillion-dollar business of trafficking plants and animals

Campbell’s death was as gruesome as the killers’ previous nine known crimes. Found mutilated in a pool of blood at his home in the district of Albany, South Africa, in June 2016, Campbell had been drugged but was likely in pain before he died from his injuries.

Campbell was a white rhinoceros living on a private reserve, and his killing would be the last hurrah of the now notorious Ndlovu Gang. The three poachers were arrested days later at the Makana Resort in Grahamstown, South Africa, caught red-handed with a bow saw, a tranquilizer dart gun and a freshly removed rhino horn. A variety of evidence, including cellphone records and ballistics analysis of the dart gun, would link them to the crime. But a key element was Campbell’s DNA, found in the horn and on the still-bloody saw.

Among the scientific techniques used to combat poaching and wildlife trafficking, DNA is king, says Cindy Harper, a veterinary geneticist at the University of Pretoria. Its application in animal investigations is small-scale but growing in a field with a huge volume of crime: The value of the illegal wildlife trade is as much as $20 billion per year, Interpol estimates.

“It’s not just a few people swapping animals around,” says Greta Frankham, a wildlife forensic scientist at the Australian Center for Wildlife Genomics in Sydney. “It’s got links to organized crime; it is an enormous amount of turnover on the black market.”

The problem is global. In the United States, the crime might be the illegal hunting of deer or black bears, the importing of protected-animal parts for food or medicinal use, the harvesting of protected cacti, or the trafficking of ivory trinkets. In Africa or Asia, it might be the poaching of pangolins, the globe’s most trafficked mammal for both its meat and its scales, which are used in traditional medicines and magic practices. In Australia, it might be the collection or export of the continent’s unique wildlife for the pet trade.

Techniques used in wildlife forensics are often direct descendants of tools from human crime investigations, and in recent years scientists have adapted and tailored them for use in animals. Harper and colleagues, for example, learned to extract DNA from rhinoceros horns, a task once thought impossible. And by building DNA databases — akin to the FBI’s CODIS database used for human crimes — forensic geneticists can identify a species and more: They might pinpoint a specimen’s geographic origin, family group, or even, in some cases, link a specific animal or animal part to a crime scene.

Adapting this science to animals has contributed to major crime busts, such as the 2021 arrests in an international poaching and wildlife trafficking ring. And scientists are further refining their techniques in the hopes of identifying more challenging evidence samples, such as hides that have been tanned or otherwise degraded.

“Wildlife trafficking investigations are difficult,” says Robert Hammer, a Seattle-based special agent-in-charge with Homeland Security Investigations, the Department of Homeland Security’s arm for investigating diverse crimes, including those involving smuggling, drugs and gang activity. He and his colleagues, he says, rely on DNA and other forensic evidence “to tell the stories of the animals that have been taken.”

First, identify

Wildlife forensics generally starts with a sample sent to a specialized lab by investigators like Hammer. Whereas people-crime investigators generally want to know “Who is it?” wildlife specialists are more often asked “What is this?” — as in, “What species?” That question could apply to anything from shark fins to wood to bear bile, a liver secretion used in traditional medicines.

“We get asked questions about everything from a live animal to a part or a product,” says Barry Baker, deputy laboratory director at the US National Fish and Wildlife Forensics Laboratory in Ashland, Oregon.

Investigators might also ask whether an animal photographed at an airport is a species protected by the Convention on International Trade in Endangered Species of Wild Fauna and Flora, or CITES, in which case import or export is illegal without a permit. They might want to know whether meat brought into the US is from a protected species, such as a nonhuman primate. Or they might want to know if a carved knickknack is real ivory or fake, a difference special lighting can reveal.

While some identifications can be made visually, DNA or other chemical analyses may be required, especially when only part of the creature is available. To identify species, experts turn to the DNA in mitochondria, the cellular energy factories that populate nearly every cell, usually in multiple copies. DNA sequences therein are similar in all animals of the same species, but different between species. By reading those genes and comparing them to sequences in a database such as the Barcode of Life, forensic geneticists can identify a species.

To go further and link a specimen to a specific, individual animal, forensic geneticists use the same technique that’s used in human DNA forensics, in this case relying on the majority of DNA, which is contained in the cell’s nucleus. Every genome contains repetitive sequences called microsatellites that vary in length from individual to individual. Measuring several microsatellites creates a DNA fingerprint that is rare, if not unique. In addition, some more advanced techniques use single-letter variations in DNA sequences for fingerprinting.

Comparing the DNA of two samples allows scientists to make a potential match, but it isn’t a clincher: That requires a database of DNA fingerprints from other members of the species to calculate how unlikely it is — say, a one-in-a-million chance — that the two samples came from different individuals. Depending on the species’ genetic diversity and its geographic distribution, a valid database could have as few as 50 individuals or it could require many more, says Ashley Spicer, a wildlife forensic scientist with the California Department of Fish and Wildlife in Sacramento. Such databases don’t exist for all animals and, indeed, obtaining DNA samples from even as few as 50 animals could be a challenge for rare or protected species, Spicer notes.
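As a toy illustration of how such a database turns a match into a statistic (the allele frequencies below are invented, and real casework uses more loci and careful population corrections), the sketch multiplies the expected frequency of the matching genotype at each microsatellite locus, assuming the loci are independent, to estimate the chance that a random, unrelated animal would share the same DNA fingerprint.

# Toy random-match probability from a microsatellite DNA fingerprint.
# Assumes independent loci and Hardy-Weinberg proportions; the allele
# frequencies below are invented for illustration.

def genotype_frequency(p, q):
    """Expected population frequency of a genotype with allele frequencies p and q."""
    return p * p if p == q else 2 * p * q

# Allele-frequency pairs for the matching genotype at each locus
loci = [(0.12, 0.30), (0.25, 0.25), (0.08, 0.40), (0.15, 0.22), (0.30, 0.10)]

random_match_probability = 1.0
for p, q in loci:
    random_match_probability *= genotype_frequency(p, q)

print(f"Chance an unrelated animal shares this profile: about 1 in {1 / random_match_probability:,.0f}")

With just these five made-up loci, the odds already land near one in a million; adding more loci drives the number far lower.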

Investigators use these techniques in diverse ways: An animal may be the victim of a crime, the perpetrator or a witness. And if, say, dogs are used to hunt protected animals, investigators could find themselves with animal evidence related to both victim and suspect.

For witnesses, consider the case of a white cat named Snowball. When a woman disappeared in Richmond, on Canada’s Prince Edward Island, in 1994, a bloodstained leather jacket with 27 white cat hairs in the lining was found near her home. Her body was found in a shallow grave in 1995, and the prime suspect was her estranged common-law husband, who lived with his parents and Snowball, their pet. DNA from the root of one of the jacket hairs matched Snowball’s blood. Though the feline never took the stand, the cat’s evidence spoke volumes, helping to clinch a murder conviction in 1996.


A database for rhinos

The same kind of specific linking of individual animal to physical evidence was also a key element in the case of Campbell the white rhino. Rhino horn is prized: It’s used in traditional Chinese medicine and modern variants of the practice to treat conditions from colds to hangovers to cancer, and is also made into ornaments such as cups and beads. At the time of Campbell’s death, his horn, weighing north of 10 kilograms, was probably worth more than $600,000 — more than its weight in gold — on the black market.

The DNA forensics that helped nab the Ndlovu Gang started with experiments in the early 2000s, when rhino poaching was on the rise. Scientists once thought rhino horns were nothing but densely packed hair, lacking cells that would include DNA, but a 2006 study showed that cells, too, are present. A few years later, Harper’s group reported that even though these cells were dead, they contained viable DNA, and the researchers figured out how to access it by drilling into the horn’s core.

In 2010, a crime investigator from South Africa’s Kruger National Park dropped by Harper’s lab. He was so excited by the potential of her discovery to combat poaching that he ripped a poster describing her results off the wall, rolled it up and took it away with him. Soon after, Harper launched the Rhinoceros DNA Index System, or RhODIS. (The name is a play on the FBI’s CODIS database, for Combined DNA Index System.)

Today, thanks to 2012 legislation from the South African government, anyone in that nation who handles a rhino or its horn — for example, when dehorning animals for the rhinos’ own protection — must send Harper’s team a sample. RhODIS now contains about 100,000 DNA fingerprints, based on 23 microsatellites, from African rhinoceroses both black and white, alive and long dead, including most of the rhinos in South Africa and Namibia, as well as some from other nations.

RhODIS has assisted with numerous investigations, says Rod Potter, a private consultant and wildlife crime investigator who has worked with the South African Police Service for more than four decades. In one case, he recalls, investigators found a suspect with a horn in his possession and used RhODIS to identify the animal before the owner even knew the rhino was dead.

In Campbell’s case, in 2019 the three poachers were convicted, to cheers from observers in the courtroom, of charges related to 10 incidents. Each gang member was sentenced to 25 years in prison.

Today, as rhino poaching has rebounded after a pandemic-induced lull, the RhODIS database remains important. And even when RhODIS can’t link evidence to a specific animal, Potter says, the genetics are often enough to point investigators to the creature’s approximate geographic origin, because genetic markers vary by location and population. And that can help illuminate illegal trade routes.

Elephants also benefit

DNA can make a big impact on investigations into elephant poaching, too. Researchers at the University of Washington in Seattle, for example, measured DNA microsatellites from roving African elephants as well as seized ivory, then built a database and a geographical map of where different genetic markers occur among elephants. The map helps to determine the geographic source of poached, trafficked tusks seized by law enforcement officials.

Elephants travel in matriarchal herds, and DNA markers also run in families, allowing the researchers to determine the relatedness of different tusks, be they from parents, offspring, siblings or half-siblings. When they find tusks from the same elephant or clan in different shipments with a common port, it suggests that the shipments were sent from the same criminal network — which is useful information for law enforcement officials.

This kind of information came in handy during a recent international investigation, called Operation Kuluna, led by Hammer and colleagues at Homeland Security Investigations. It started with a sting: Undercover US investigators purchased African ivory that was advertised online. In 2020, the team spent $14,500 on 49 pounds of elephant ivory that was cut up, painted black, mixed with ebony and shipped to the United States with the label “wood.” The following year, the investigators purchased about five pounds of rhino horn for $18,000. The undercover buyers then expressed interest in lots more inventory, including additional ivory, rhino horns and pangolin scales.

The promise of such a huge score lured two sellers from the Democratic Republic of the Congo (DRC) to come to the United States, expecting to seal the $3.5 million deal. Instead, they were arrested near Seattle and eventually sentenced for their crimes. But the pair were not working alone: Operations like these are complex, says Hammer, “and behind complex conspiracies come money, organizers.” And so the investigators took advantage of elephant genetic and clan data which helped to link the tusks to other seizures. It was like playing “Six Degrees of Kevin Bacon,” says Hammer.

Shortly after the US arrests, Hammer’s counterparts in Africa raided warehouses in the DRC to seize more than 2,000 pounds of ivory and 75 pounds of pangolin scales, worth more than $1 million.

Despite these successes, wildlife forensics remains a small field: The Society for Wildlife Forensic Science has fewer than 200 members in more than 20 countries. And while DNA analysis is powerful, the ability to identify species or individuals is only as good as the genetic databases researchers can compare their samples to. In addition, many samples contain degraded DNA that simply can’t be analyzed — at least, not yet.

Today, in fact, a substantial portion of wildlife trade crimes may go unprosecuted because researchers don’t know what they’re looking at. The situation leaves scientists stymied by that very basic question: “What is this?”

For example, forensic scientists can be flummoxed by animal parts that have been heavily processed. Cooked meat is generally traceable; leather is not. “We have literally never been able to get a DNA sequence out of a tanned product,” says Harper, who wrote about the forensics of poaching in the 2023 Annual Review of Animal Biosciences. In time, that may change: Several researchers are working to improve identification of degraded samples, perhaps by analyzing the proteins they contain, says Spicer, since proteins are more resistant than DNA to destruction by heat and chemical treatment.

Success, stresses Spicer, will require the cooperation of wildlife forensic scientists around the world. “Anywhere that somebody can get a profit or exploit an animal, they’re going to do it — it happens in every single country,” she says. “And so it’s really essential that we all work together.”

Saturday 27 2024

We urgently need data for equitable personalized medicine

OPINION: A massive bias in medical studies toward men of European origin means that genetic variants in understudied populations don’t get the focus they deserve

Warfarin is a powerful blood thinner and a leading drug for cardiovascular disease worldwide. But in South Africa, it is among the top four drug types leading to hospitalization from adverse drug reactions. It’s reasonable to suppose that the drug causes similar problems elsewhere in sub-Saharan Africa, though the national data needed to show it are lacking.

The fact that warfarin is riskier in some populations than others isn’t a surprise. Different geographic regions tend to host people with slightly different genetic makeups, and sometimes those genetic differences lead to radically different reactions to drugs. For certain people, a higher dosage of warfarin is fine; for others, it’s dangerous. Researchers have known this for decades.

The problem is that the majority of medical research, including genetic research, is still done mainly on one subset of the world’s population: men of Northern European origin. This means that negative drug-gene interactions in other, less well-studied populations can fly beneath the radar. In the case of warfarin, one study concluded that using someone’s genetic information to help guide their drug dosing would benefit 18 percent to 24 percent of people categorized as white, but have no benefit for people identified as Black, Chinese or Japanese.

While that study is a decade old, the general point still holds true: A bias in our current understanding of the genetics of different populations means that some people would be helped far more than others by genetically informed personalized medicine.

As a bioinformatician, I am now focusing my attention on gathering the statistics to show just how biased medical research data are. There are problems across the board, ranging from which research questions get asked in the first place, to who participates in clinical trials, to who gets their genomes sequenced. The world is moving toward “precision medicine,” where any individual can have their DNA analyzed and that information can be used to help prescribe the right drugs in the right dosages. But this won’t work if a person’s genetic variants have never been identified or studied in the first place.

It’s astonishing how powerful our genetics can be in mediating our response to medicines. Take the gene CYP2D6, which is known to play a vital role in how fast humans metabolize 25 percent of all the pharmaceuticals on the market. If you have a genetic variant of CYP2D6 that makes you metabolize drugs more quickly, or less quickly, it can have a huge impact on how well those drugs work and the dangers you face from taking them. Codeine was banned from all of Ethiopia in 2015, for example, because a high proportion of people in the country (perhaps 30 percent) have a genetic variant of CYP2D6 that makes them quickly metabolize that drug into morphine, making it more likely to cause respiratory distress and even death.

Researchers have identified more than a hundred different CYP2D6 variants, and there are likely many more whose impacts we don’t yet know, especially in understudied populations.
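To see how such a variant would feed into a precision-medicine decision, consider the rough sketch below. It converts a pair of CYP2D6 star alleles into a metabolizer phenotype and a codeine flag; the activity scores and cutoffs are simplified stand-ins for published clinical guidelines (such as CPIC’s), not a real dosing tool.

```python
# Illustrative sketch of how a CYP2D6 genotype might inform a codeine
# decision. Activity scores and thresholds below are simplified
# placeholders, not clinical values.

ACTIVITY = {"*1": 1.0, "*2": 1.0, "*4": 0.0, "*10": 0.25, "*1xN": 2.0}  # simplified

def metabolizer_phenotype(allele_a, allele_b):
    score = ACTIVITY[allele_a] + ACTIVITY[allele_b]
    if score == 0:
        return "poor"
    if score <= 1.0:
        return "intermediate"
    if score <= 2.0:
        return "normal"
    return "ultrarapid"

def codeine_flag(phenotype):
    # Ultrarapid metabolizers convert codeine to morphine too quickly;
    # poor metabolizers get little pain relief from it.
    if phenotype in ("ultrarapid", "poor"):
        return "avoid codeine; consider an alternative analgesic"
    return "standard labeling applies"

pheno = metabolizer_phenotype("*1xN", "*1")  # duplicated functional allele
print(pheno, "->", codeine_flag(pheno))      # ultrarapid -> avoid codeine
```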

Back in 2016, researchers published an important article looking at more than 2,500 genome-wide association studies done up to that time. These are studies that scan the genomes of thousands of people to find variants associated with disease traits. What the researchers found was disturbing: While there had been some improvement in diversity since 2009, 81 percent of the nearly 35 million samples in those studies still came from people of European descent.

You might expect that, since everyone knows this is a problem, it would have gotten much better over recent years. It hasn’t. In 2021, another study of genome-wide association studies showed that the European-origin proportion had increased, not decreased, from 81 percent to 86 percent.

It’s not just genome-wide studies that have this issue. Direct-to-consumer genetic sequencing services like 23andMe are also skewed: One analysis suggests that 95 percent of the participants have predicted European ancestry, compared to just 2 percent African. And in PharmGKB, one of the world’s leading databases of drug-gene interactions, 64 percent of the data come from people of European ancestry, though this group makes up only 16 percent of the global population. Indigenous Americans account for the smallest amount of the data (just 0.1 percent). But when taking global population into account, it is Central and South Asian people who are least well represented, with only 2 percent of the data but 26 percent of the global population.

People of African descent have the greatest genetic diversity on the planet (because humanity originated in Africa), and so arguably they deserve the greatest amount of study. But this is hardly the situation. This population makes up just 4 percent of the PharmGKB dataset, for example.
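One way to make these skews concrete is to divide each group’s share of the PharmGKB data by its share of the global population, using the figures quoted above; a ratio of 1.0 would mean a group is represented in proportion to its size. The short calculation below does exactly that (population shares not quoted above are left blank rather than guessed).

```python
# Representation ratio = share of PharmGKB data / share of global population,
# using the rounded percentages quoted in the text above.

figures = {
    # group: (percent of PharmGKB data, percent of global population)
    "European ancestry":   (64.0, 16.0),
    "Central/South Asian": (2.0, 26.0),
    "African ancestry":    (4.0, None),   # population share not quoted above
    "Indigenous American": (0.1, None),   # population share not quoted above
}

for group, (data_pct, pop_pct) in figures.items():
    if pop_pct is None:
        print(f"{group}: {data_pct}% of the data (population share not given here)")
    else:
        print(f"{group}: {data_pct / pop_pct:.2f}x their population share")

# European ancestry: 4.00x their population share
# Central/South Asian: 0.08x their population share
```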

Geographic ancestry isn’t the only factor that’s biased. Women make up only 38 percent of participants in studies of drug effectiveness and pharmacokinetics, for example. Because of gender bias all along the line, women experience adverse drug reactions nearly twice as often as men. And this doesn’t even scratch the surface of people with genetic conditions — like my son who has Down syndrome — or other disabilities.

There are some good efforts working to correct these problems. On 18 October 2023, researchers announced plans to create one of the largest-yet databases of genomes exclusively from people with African ancestry. The project aims to recruit at least 500,000 volunteers (for comparison, tens of millions of people globally have had their genomes sequenced to date). This is a great effort; more should follow suit.

Everyone stands to gain from more diverse work. Right now, one clue that researchers use to help determine whether a genetic mutation might be linked to disease (or not) is whether that mutation is rare (or not); if a variant is extremely uncommon, this is one hint that it might be pathogenic (since most people don’t have a given disease). But this could be sending researchers chasing after red herrings. One study published in March 2023, for example, performed whole-genome sequencing on 180 people from 12 indigenous African populations, and found that of 154 mutations labeled “pathogenic” or “likely pathogenic” in a well-known database, 44 were at least five times more frequent in at least one of these African populations. This suggests that those mutations might be benign after all.
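In code terms, the study’s comparison amounts to a frequency sanity check, sketched below with made-up variant IDs and allele frequencies: any variant labeled pathogenic whose frequency in a newly sequenced population is at least five times the frequency that underpinned the label gets flagged for re-examination.

```python
# Toy version of the frequency check described above. Variant IDs and
# allele frequencies are invented for illustration only.

LABELED_PATHOGENIC = {
    # variant: (frequency in reference databases, frequency in newly studied population)
    "var_A": (0.0004, 0.0030),  # 7.5x more common -> worth re-examining
    "var_B": (0.0010, 0.0009),  # similar frequency -> label stands for now
}

def flag_for_review(variants, fold_change=5.0):
    """Return variants whose population frequency exceeds the reference by fold_change or more."""
    flagged = []
    for name, (ref_freq, pop_freq) in variants.items():
        if ref_freq > 0 and pop_freq / ref_freq >= fold_change:
            flagged.append(name)
    return flagged

print(flag_for_review(LABELED_PATHOGENIC))  # ['var_A']
```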

The International Covenant on Economic, Social and Cultural Rights, adopted by the United Nations General Assembly on 16 December 1966, recognizes everyone’s right to enjoy the benefits of scientific progress. But that is not happening yet. We need to ramp up representation in genetic and medical studies to ensure fair treatment for all.