
By Susan Duclos – All News PipeLine

On August 1, 2017, the YouTube blog provided an update on the company’s Orwellian censorship policies. Under the guise of updating users on their “commitment to fight terror content online,” they will be implementing full Nazi fascist tactics to hide content that is controversial but does not “violate our policies,” using “cutting-edge machine learning technology designed to help us identify and remove violent extremism and terrorism-related content in a scalable way.”

Tougher standards: We’ll soon be applying tougher treatment to videos that aren’t illegal but have been flagged by users as potential violations of our policies on hate speech and violent extremism. If we find that these videos don’t violate our policies but contain controversial religious or supremacist content, they will be placed in a limited state. The videos will remain on YouTube behind an interstitial, won’t be recommended, won’t be monetized, and won’t have key features including comments, suggested videos, and likes. We’ll begin to roll this new treatment out to videos on desktop versions of YouTube in the coming weeks, and will bring it to mobile experiences soon thereafter. These new approaches entail significant new internal tools and processes, and will take time to fully implement.

In other words, ladies and gentlemen, if political commentary or religious commentary is considered non-compliant with the ideology of the groups being used to “train” their bots, which YouTube describes as “expert NGOs and institutions through our Trusted Flagger program, including the Anti-Defamation League, the No Hate Speech Movement, and the Institute for Strategic Dialogue,” those videos will be hidden: there will be no ability to “share” them using the icons meant for that purpose, the creators won’t be able to monetize them, and viewers will not be able to “like” them or comment on them.

If that isn’t going full fascist Nazi on independent media, what would be?

They list three specific ways they claim they have seen “positive progress.”

The first is listed as “Speed and efficiency,” where they state: “Our machine learning systems are faster and more effective than ever before. Over 75 percent of the videos we’ve removed for violent extremism over the past month were taken down before receiving a single human flag.” The second is listed as “Accuracy,” where they claim: “The accuracy of our systems has improved dramatically due to our machine learning technology. While these tools aren’t perfect, and aren’t right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed.”