Everyone is laughing at the Google Gemini AI rollout. But it’s no joke.
The problem is more nefarious than historically inaccurate generated images.
The manipulation of AI is just one aspect of broader “discrimination by algorithm” being built into corporate America, and it could cost you job opportunities and more.
When Gemini was asked to produce pictures of white people, it refused, saying it couldn’t fulfill the request because it “reinforces harmful stereotypes and generalizations about people based on their race.”
But it had no trouble generating pictures of a female pope, non-white Vikings and a black George Washington.
Microsoft’s AI image-generation tool has its own problems, producing sexually explicit and violent images.
Clearly AI imaging has gone off the rails.
While Google’s CEO admitted Gemini’s results were “biased” and “unacceptable,” that’s not a bug but a feature — much as “anti-racism” theory gave rise to openly racist diversity, equity and inclusion practices.
As one of us (William) recently explained to The Post: “In the name of anti-bias, actual bias is being built into the systems. This is a concern not just for search results, but real-world applications where ‘bias free’ algorithm testing actually is building bias into the system by targeting end results that amount to quotas.”
Our Equal Protection Project (EqualProtect.org) sounded the alarm almost a year ago, when we exposed the use of algorithms to manipulate pools of job applicants in LinkedIn’s “Diversity in Recruiting” function.
LinkedIn justified the racial and other identity-group manipulation as necessary “to make sure people have equal access” to job opportunities, but what it meant by “equal access” was actually preferential treatment.
Such bias operates in the shadows. Job candidates don’t see how the algorithms affect their prospects.
Algorithms can be — and are — used to elevate certain groups over others.
But it’s not limited to LinkedIn.
The Biden administration has issued an executive order requiring bias-free algorithms, but under the progressive DEI rubric built into this policy, the absence of bias is demonstrated not by equal treatment but by “equity.”
Equity is a codeword for quotas.
In the world of “bias-free” algorithm testing, bias is built in to achieve equity.
What happened with Gemini is an example of such programming.
It’s one thing to get a bad search result; it’s quite another thing to lose a job opportunity.
As attorney Stewart Baker, an expert on such deck-stacking, explained at an EPP event, “preventing bias . . . in artificial intelligence is almost always going to be code for imposing stealth quotas.”
The insidious reach of “bias-free” bias will grow.
Discrimination by algorithm has the potential to manipulate every major detail of our lives in order to achieve group outcomes and quotas.
These algorithms are designed to take the scourge of DEI and secretly bring it into every facet of life and the economy.
People are purposely “teaching” AI that images of black Vikings are a more equitable result than the truth.
Because Big Tech already knows a lot about you, including your race and ethnicity, it’s not hard to imagine discrimination by algorithm manipulating access to a host of goods and services.
Get turned down for a job, a loan, an apartment, or college admission? Could be a “bias free” algorithm at work.
But you’ll almost never be able to prove it, because the algorithms operate out of sight, certified as “bias free” even as they build bias into the system to achieve quotas.
You get the picture.
Discrimination by algorithm is a threat to equality and must be stopped.
William A. Jacobson is a clinical professor of law at Cornell and founder of the Equal Protection Project, where Kemberlee Kaye is operations and editorial director.
This story originally appeared in the New York Post.