NEW YORK (MainStreet) — There is race discrimination, gender discrimination, sexual orientation discrimination, religious discrimination and class discrimination. Now academia is sounding a clarion call about a new type of bias: data discrimination. And it could hurt you in the pocketbook.

Apparently, the indiscriminate collection of "big data" is resulting in discrimination. People are being treated prejudicially on the basis of what their data footprint says about them.

Two researchers are now lobbying for "data due process" for citizens. They advocate giving people a legal right to understand how data analytics are used in determinations made against them, such as denial of health insurance or of a job.

Kate Crawford, a Microsoft researcher, visiting professor at MIT and senior fellow at the Information Law Institute at New York University (NYU), made this claim during MIT Technology Review's recent EmTech conference. She has devoted much of her career to researching the politics and ethics of data.

She has been publishing papers about this subject since 2006, although her papers mentioning big data specifically begin in 2011. She researches how people engage with networked technologies and how networked data becomes part of our culture. Now she is calling for justice for big data "victims." Crawford presented a paper titled "Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms" that was published October 1 on the Social Science Research Network and elsewhere. The paper was co-authored by Jason Schultz, a professor of law and the director of the Technology Law & Policy Clinic at NYU School of Law.

"We are both advocates for protecting privacy," Schultz said. "The dangers are complex, but one of the main risks is discrimination based on what algorithms determine about you without any right to due process."

Big data also threatens any real form of anonymity, and that exposure becomes all the more dangerous when it compromises your privacy during a job search or while you are trying to get the right medical insurance.

"Unlike previous computational models that exploit personally identifiable information (PII) directly, such as behavioral targeting, big data has exploded the definition of PII to make many more sources of data personally identifiable," Crawford and Schultz write in their study. "Big data presents substantial privacy concerns – risks of bias or discrimination based on the inappropriate generation of personal data – a risk we call 'predictive privacy harm.' Predictive analysis and categorization can pose a genuine threat to individuals, especially when it is performed without their knowledge or consent."

This was confirmed by a 2013 study at Cambridge University in the U.K., which determined that an analysis of "likes" on Facebook could accurately predict demographic information about a user, including factors such as race, age and sex.
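To see how a handful of innocuous "likes" can reveal demographics, consider a toy sketch of the kind of classifier such studies use. This is not the Cambridge researchers' actual method; it is a minimal Naive Bayes example on invented page names and made-up group labels, purely to illustrate how patterns in likes can predict an attribute a user never disclosed.

```python
from collections import defaultdict
import math

# Synthetic training data: (set of liked pages, demographic label).
# Page names and labels are invented for illustration only.
TRAIN = [
    ({"page_a", "page_b"}, "group1"),
    ({"page_a", "page_c"}, "group1"),
    ({"page_d", "page_e"}, "group2"),
    ({"page_d", "page_b"}, "group2"),
]

def train_nb(data, smoothing=1.0):
    """Count per-class like frequencies (toy Naive Bayes with Laplace smoothing)."""
    class_counts = defaultdict(int)
    like_counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for likes, label in data:
        class_counts[label] += 1
        for page in likes:
            like_counts[label][page] += 1
            vocab.add(page)
    return class_counts, like_counts, vocab, smoothing

def predict(model, likes):
    """Return the most probable label for a user's set of likes."""
    class_counts, like_counts, vocab, s = model
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for label, n in class_counts.items():
        score = math.log(n / total)  # class prior
        for page in vocab:
            p = (like_counts[label][page] + s) / (n + 2 * s)
            score += math.log(p if page in likes else 1 - p)
        if score > best_score:
            best, best_score = label, score
    return best

model = train_nb(TRAIN)
print(predict(model, {"page_a"}))  # prints "group1"
```

Even with this crude model, a single like tips the prediction toward one group, which is exactly the mechanism that makes seemingly non-identifying data personally revealing at scale.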