The dirty little secret is out about artificial intelligence.

No, not the one about machines taking over the world. That’s an old one. This one is more insidious. Data scientists, AI experts and others have long suspected it would be a problem. But it’s only within the last couple of years, as AI or some version of machine learning has become nearly ubiquitous in our lives, that the issue has come to the forefront.

AI is prejudiced. Sexism. Ageism. Racism. Name an -ism, and more likely than not, the results produced by our machines are biased in one way or another. But an emerging think tank dubbed Diversity.ai believes our machines can do better than their creators when it comes to breaking down stereotypes and other barriers to inclusion.

The problem has been well documented: in 2015, for example, Google’s photo app embarrassingly tagged some black people as gorillas. A recent pre-print paper reported widespread human bias in the metadata for a popular database of Flickr images used to train neural networks. Even more disturbing was an investigative report last year by ProPublica that found software used to predict future criminal behavior—a la the film “Minority Report”—was biased against minorities.

For Anastasia Georgievskaya, the aha moment came during work on an AI-judged beauty contest developed by Youth Laboratories, a company she co-founded in 2015 that uses machine vision and AI to study aging. Almost all the winners picked by the computer jury were white. It was proof that machines can learn prejudice.

“I thought that discrimination by the robots is likely, but only in a very distant future,” says Georgievskaya by email. “But when we started working on Beauty.AI, we realized that people are discriminating [against] other people by age, gender, race and many other parameters, and nobody is talking about it.”

Algorithms can always be improved, but a machine can only learn from the data it is fed.

“We struggled to find the data sets of older people and people of color to be able to train our deep neural networks,” Georgievskaya says. “And after the first and second Beauty.ai contests, we realized that it is a major problem.”

Age bias in available clinical data has frustrated Alex Zhavoronkov, CEO of Insilico Medicine, Inc., a bioinformatics company that combines genomics, big data analysis and deep learning for drug discovery related to aging and age-related diseases. A project called Aging.ai, which uses a deep neural network trained on hundreds of human blood tests to predict age, produced markedly higher errors for older populations.

“Our company came to study aging not only because we want to extend healthy productive longevity, but to fix one important problem in the pharmaceutical industry—age bias,” Zhavoronkov says. “Many clinical trials cut off patient enrollment by age, and thousands of healthy but older people miss their chance to get a treatment.”

Georgievskaya and like-minded scientists not only recognized the problem, they started to study it in depth—and do something about it.

“We realized that it’s essential to develop routines that test AI algorithms for discrimination and bias, and started experimenting with the data, methods and the metrics,” she says. “Our company is not only focused on beauty, but also on healthcare and visual-imaging biomarkers of health. And there we found many problems in age, gender, race and wealth bias.”

As Zhavoronkov envisions it, Diversity.ai will bring together a “diverse group of people with a very ‘fresh’ perspective, who are not afraid of thinking out of the box. Essentially, it is a discussion group with many practical projects and personal and group goals.”

His own goal? “My personal goal is to prove that the elderly are being discriminated [against], and develop highly accurate multi-modal biomarkers of chronological and biological aging. I also want to solve the racial bias and identify the fine equilibrium between the predictive power and discrimination in the [deep neural networks].”

The group’s advisory board is still coming together, but already includes representatives from Elon Musk’s billion-dollar non-profit AI research company OpenAI, computing company Nvidia, a leading South Korean futurist, and the Future of Humanity Institute at the University of Oxford.

Nell Watson, founder and CEO of Poikos, a startup that developed a 3D body scanner for mobile devices, is one of the advisory board members. She’s also an adjunct in the Artificial Intelligence and Robotics track at Singularity University. She recently launched OpenEth.org, which she describes as a non-profit machine ethics research company that aims to advance the field by developing a framework for analyzing various ethical situations.

She sees OpenEth.org and Diversity.ai as natural allies toward the goal of developing ethical, objective AI.

She explains that the OpenEth team is developing a blockchain-based public ledger system capable of analyzing contracts for adherence to a structure of ethics.

“[It] provides a classification of the contract's contents, without necessarily needing for the contract itself to be public,” she explains. That means companies can safeguard proprietary algorithms while providing public proof that they adhere to ethical standards.

“It also allows for a public signing of the ownership/responsibility for a given agent, so that anyone interacting with a machine will know where it came from and whether the ruleset that it's running under is compatible with their own values,” she adds. “It's a very ambitious project, but we are making steady progress, and I expect it to play a piece of many roles necessary in safeguarding against algorithmic bias.”

Georgievskaya says she hopes Diversity.ai can hold a conference later this year to continue to build awareness around issues of AI bias and begin work to scrub discrimination from our machines.

“Technologies and algorithms surround us everywhere and became an essential part of our daily life,” she says. “We definitely need to teach algorithms to treat us in the right way, so that we can live peacefully in [the] future.”