When AI Discriminates Against You, How Can Its Prejudice Be Resolved?

We have already talked a great deal about AI discrimination, such as appearance bias and language discrimination, but AI's discrimination is clearly not confined to these surface areas. Behind the appearance and language bias lies something deeper: AI's one-sided judgment of society itself.

The AI discrimination problem, then, is not simply an algorithm problem. Today we will talk about AI's prejudice and what we can do when confronted with it.


It starts with humans: the root causes of AI discrimination

First, we need to be clear about why AI discriminates. At present, the causes fall into two broad categories.

1. The limitations of data and algorithms.

The first is the limitation of the data. AI's judgments do not come out of thin air, and they are not random; they are the product of training and learning. To train an AI's ability in a given domain, data from that domain must be collected for it to learn from. If the amount of training data is insufficient, the AI's learning will be incomplete, and it may make wrong judgments.

The other aspect of the data limitation comes from the data itself. If a group shares some common characteristics, the AI will treat the most common of those characteristics as labels for the group. Once an individual falls outside those group features, or carries only minority characteristics within the group, the AI may judge them negatively.
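The effect described above can be sketched in a few lines. This is a toy illustration with made-up data, not any real system: because most positively labeled training examples share the majority trait, a naive frequency-based model ends up rejecting the minority trait wholesale, even though some minority examples were approved.

```python
from collections import Counter

# Hypothetical training set: (features, label) pairs. Most "approve"
# examples happen to share trait "A", so the trait itself becomes
# the deciding signal for a naive frequency model.
train = ([({"trait": "A"}, "approve")] * 9 + [({"trait": "B"}, "approve")] * 1
       + [({"trait": "A"}, "reject")] * 1 + [({"trait": "B"}, "reject")] * 4)

def predict(trait):
    # Predict the most common label seen for this trait value.
    votes = Counter(label for feats, label in train if feats["trait"] == trait)
    return votes.most_common(1)[0][0]

print(predict("A"))  # majority trait -> "approve"
print(predict("B"))  # minority trait -> "reject", despite approved B examples
```

The model never "decided" to discriminate; it simply mirrored the imbalance in what it was shown.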

Second is the limitation of the algorithm. When programmers set up an AI's learning pipeline, they cannot filter out every piece of biased information. So after learning from a large amount of data, the AI automatically matches keywords across different groups and then makes judgments, associating certain professions with one gender, for example.

In fact, virtually every AI discrimination problem we currently face centers on these two aspects: either the data learned is insufficient, or the data is sufficient but the scope of learning exceeds what the programmer intended, and the AI begins to form its own associations and judgments. That is how the discrimination arises.

2. The reinforcement of humanity's inherent prejudice.

But the root cause of AI's discrimination is not the data or the algorithms. To a significant extent, AI discrimination is a manifestation and an amplification of human prejudice. Humans are good at verbal restraint and at maintaining a superficially polite form of communication. Over time, people seem to have come to regard hiding their prejudice against others as a virtue. The logic becomes: discriminating in your heart does not matter; as long as you behave well on the surface, you count as a good person.

In this regard, the contrast revealed when charitable performances are exposed as hollow is particularly striking.

Part of the significance of AI's emergence is that it punctures this seemingly friendly self-deception. It drags back into view the things that were deliberately, and apparently successfully, hidden. That is the manifestation; why do we also say the prejudice is strengthened?

First, AI learning is a process of exclusion. Take semantic segmentation of images: to find an eyeball, the image must first be segmented and everything that is not an eyeball rejected. Likewise, to recruit a suitable employee, the core of the task is eliminating unsuitable candidates. A process of repeated negation is an intensifying process.

Second, it follows from AI's special nature. AI is the first human-made thing in history to possess judgment of its own. When its conclusions fly in the face of the rules humans have set up for themselves, people are shocked, and at the same time they are forced to see themselves more clearly. AI has ruthlessly torn off humanity's mask, and that naturally draws extra attention.

The limitations of data and algorithms, together with the reinforcement of humanity's inherent notions, explain where AI discrimination comes from.

AI discrimination comes in many varieties, and its impact may be large

If AI merely discriminated against us, and we knew its discrimination came from algorithms and from human failings, there would be no need to worry. After all, discrimination is everywhere, and anyone with a thick enough skin can shrug it off; when it comes to shameless self-consolation, no one seems able to rival modern people.

But when AI is not merely discriminating against you, it is passing judgment on you, deciding your life, your work, your status... can you still simply shrug it off?

Take, for example, a concept that is very popular right now: using AI for recruitment.

In theory, by learning from existing employee profiles, an AI can screen for the newcomers who best meet the company's needs. Labeling these employees is part of the learning: strong ability, good eloquence, and internship experience may all be selected as criteria. But what if, among these samples, the AI finds other labels that are highly correlated with the employees but irrelevant to the job?

For example, most of these employees may be men, so the AI may conclude that women are unsuitable for the job; most may hold urban hukou, so rural applicants are screened out; most may be under 23, so anyone older than 23 may be deemed unsuitable...
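How a proxy attribute sneaks in can be shown with a minimal sketch. The hiring records below are entirely hypothetical; the point is that an attribute like gender, which has nothing to do with competence, can show a stark statistical gap in historical data, and a model trained on that history would absorb the gap as a rule.

```python
# Hypothetical historical hiring records. "gender" and "hukou" are
# irrelevant to the job, yet they correlate with past hiring outcomes.
records = [
    {"gender": "M", "hukou": "urban", "age": 22, "hired": True},
    {"gender": "M", "hukou": "urban", "age": 21, "hired": True},
    {"gender": "M", "hukou": "rural", "age": 22, "hired": True},
    {"gender": "F", "hukou": "urban", "age": 22, "hired": False},
    {"gender": "F", "hukou": "rural", "age": 25, "hired": False},
    {"gender": "M", "hukou": "urban", "age": 26, "hired": False},
]

def hire_rate(attr, value):
    # Fraction of past candidates with this attribute value who were hired.
    group = [r for r in records if r[attr] == value]
    return sum(r["hired"] for r in group) / len(group)

# A model fitted to these records would "learn" the gender gap:
print(hire_rate("gender", "M"))  # 0.75
print(hire_rate("gender", "F"))  # 0.0
```

Simply deleting the `gender` column does not fully solve this, since other attributes (hukou, age, even hobbies) can act as proxies for it; that is part of why the problem is harder than a one-line fix.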

You see, these few aspects alone already involve gender discrimination, regional discrimination, and age discrimination. Although people developed AI recruitment precisely to stop interviewers' subjective impressions from deciding an applicant's fate, recruitment by an excessively "objective" AI raises issues worth considering in their own right.

Another example is the deployment of AI in police work.

The most frequently mentioned example recently is using AI to help identify, and even predict, criminals. A paper published by Shanghai Jiaotong University last year, for instance, claimed to judge from a person's appearance whether they have criminal tendencies; to put it simply, whether you have a "criminal face". US police have also recently tried to deploy a predictive policing system that forecasts the people and areas likely to be involved in crime and then strengthens surveillance accordingly. The United Kingdom has adopted a similar approach to crime prevention.

There are obviously serious problems here. Can you tell criminals apart by looking at their faces? Appearance discrimination has always existed, but it was mostly aimed at perceived flaws in people's features; now it has been "upgraded". It is hard to accept that predictive policing in the United States tends to flag Black people and majority-Black neighborhoods as key risk indicators. Especially in a country where discrimination against Black people is deeply taboo, such a move provokes resentment even when no individual is singled out. As for the United Kingdom, after its surveillance system had been running for a while it turned out to be specifically targeting the poor, and it eventually had to be taken offline.

From these few cases we can see that AI also exhibits appearance-based, racial, and wealth-based discrimination, and that this discrimination is deeply embedded across many industries. In the future we may really wear contact lenses like those in Black Mirror, which display a safety rating over every passerby we look at. And would a high rating necessarily be genuine, or could the score be gamed?

In other words, AI discrimination is a comprehensive phenomenon. So the question arises again: must we abandon AI? Are we humans really that fragile?

Abandoning AI is of course unrealistic; that would be giving up eating for fear of choking. After all, AI is still profoundly changing our society. But as AI is applied at scale, the effort to mitigate or even eliminate its prejudice has become urgent.

Solutions to AI discrimination

A Google word-vector database was found to show significant gender discrimination. In a simple analogy task, given "Paris : France", the system answers "Tokyo : Japan". But given "father : doctor", it answers "mother : nurse"; given "man : programmer", it answers "woman : housewife"; and so on.
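The analogy mechanism behind these answers is simple vector arithmetic: the system computes roughly X ≈ France − Paris + Tokyo and returns the nearest word. The sketch below uses made-up 2-D vectors, not real embeddings, purely to show the mechanism; in real embeddings trained on human text, the same arithmetic is what produces "mother : nurse".

```python
# Toy 2-D "word vectors" (made up for illustration) in which the
# country-capital offset is roughly constant, mimicking how real
# embeddings answer "Paris : France :: Tokyo : X".
vec = {
    "Paris":  (2.0, 1.0), "France":  (2.0, 3.0),
    "Tokyo":  (5.0, 1.0), "Japan":   (5.0, 3.0),
    "Berlin": (8.0, 1.0), "Germany": (8.0, 3.0),
}

def analogy(a, b, c):
    # Solve  a : b :: c : X  via  X ~ b - a + c, nearest by distance.
    target = tuple(vec[b][i] - vec[a][i] + vec[c][i] for i in range(2))
    candidates = [w for w in vec if w not in (a, b, c)]
    return min(candidates,
               key=lambda w: sum((vec[w][i] - target[i]) ** 2 for i in range(2)))

print(analogy("Paris", "France", "Tokyo"))  # "Japan"
```

The arithmetic itself is neutral; the bias lives in the geometry of the learned vectors, which reflects the text humans wrote.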

Researchers devised a "hard error correction" approach. To put it simply, the correspondences judged by the system are posted to a forum-like platform, where people are asked whether each one is appropriate. If half of the respondents consider a correspondence inappropriate, that analogy is ruled out. After running this process for a while, the improvement in the AI's word matching was very significant.
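The filtering step can be sketched as follows. The vote counts are hypothetical and the real procedure surely has more machinery, but the core is just a majority-vote threshold applied to each system-produced analogy.

```python
# Sketch of "hard error correction": keep an analogy only if at least
# half of the human reviewers judged it appropriate. Vote counts are
# made-up numbers for illustration.
analogies = [
    ("Paris", "France", "Tokyo", "Japan"),
    ("man", "programmer", "woman", "housewife"),
    ("father", "doctor", "mother", "nurse"),
]
votes_appropriate = {analogies[0]: 10, analogies[1]: 2, analogies[2]: 3}
total_voters = 10

def keep(analogy):
    # Majority rule: survive only with >= 50% "appropriate" votes.
    return votes_appropriate[analogy] / total_voters >= 0.5

cleaned = [a for a in analogies if keep(a)]
print(cleaned)  # only the Paris/France/Tokyo/Japan analogy survives
```

This makes the method's limitation visible too: every questionable correspondence needs human votes, which is why it only scales to small, well-bounded domains.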

Although this method works, it can obviously only be applied in domains with a small data range. A company doing recruitment, for example, can train its AI before hiring, find the problems, and then apply hard error correction. But using it to solve all of AI's problems is somewhat unrealistic.

Researchers at Columbia University developed another approach. Advances in deep learning are a key factor in AI's resurgence, but deep learning's black-box problem remains unresolved, so these researchers tried to open the black box and figure out why AI makes certain wrong decisions. They developed a piece of software called DeepXplore that feeds a neural network deliberately confusing inputs, tricking the system into exposing its own flaws. The software can activate almost 100% of the neurons in the system, allowing the entire neural network to be scanned for errors.
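The "activate almost 100% of the neurons" claim refers to a metric the DeepXplore authors call neuron coverage: the fraction of neurons whose activation exceeds a threshold on at least one test input. Here is a toy sketch of that metric only; the one-layer network, weights, inputs, and threshold are all made up, and the real tool does far more (joint testing of multiple models, gradient-guided input generation).

```python
import math

# A toy one-layer network: 3 neurons, each with 2 input weights.
weights = [[0.5, -0.2], [0.1, 0.9], [-0.7, 0.4]]

def activations(x):
    # tanh activation of each neuron on input vector x.
    return [math.tanh(sum(w * xi for w, xi in zip(neuron, x)))
            for neuron in weights]

def neuron_coverage(test_inputs, threshold=0.25):
    # Fraction of neurons pushed above the threshold by at least
    # one input in the test set (DeepXplore-style coverage metric).
    covered = set()
    for x in test_inputs:
        for i, a in enumerate(activations(x)):
            if a > threshold:
                covered.add(i)
    return len(covered) / len(weights)

print(neuron_coverage([(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]))
```

The intuition: a neuron that no test input ever activates is a blind spot whose behavior has never been checked, so driving coverage toward 100% forces hidden decision paths, including discriminatory ones, out into the open.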


Others suggest solving the problem at the programming level. But fundamentally, humans should treat AI's discrimination as a mirror and carefully examine the prejudices they have planted throughout social life, continuing to reduce discrimination and prejudice in the real world. Only when the material it learns from disappears can AI avoid the problem of discrimination.

One thing we must still note, however, is that it is not appropriate to label all of AI's "differential treatment" as "prejudice" or "discrimination". Throughout human development, different occupations have had different requirements for people; "men plough, women weave" reflects a natural division of labor formed over a long history. So handling the problem of AI discrimination is not merely a matter of correcting an algorithm; it also involves sociology, economics, and other dimensions.

Seen this way, we may still have a long way to go in resolving AI's discrimination and prejudice. What we can do now is to start with ourselves.
