________
Greetings,
I’m writing this somewhat off-topic and technical posting because it may be relevant to our continued ability to stay in touch. It may even be relevant to whether or not we are actually in touch right now, and relevant to whether we can even know if we’re in touch or not. On to our story…
Google is a pioneering, aggressive, systems-oriented company that has created a role for itself as a setter of standards. Its search engine is the standard tool for accessing the net, Gmail sets the standard for web-based email, and then there's Google Maps, Google Earth, and the list goes on. You've really got to hand it to them. Their position and reputation in the online world are roughly comparable to Microsoft's position in the PC world. Such a company sets standards partly by good design, backed by immense R&D resources, and partly by virtue of its clout in the marketplace.
One of the areas in which Google seems to be setting a standard is spam filtering. If you have a Gmail account and Google's filter doesn't like an incoming message, that message goes into your spam folder. You have no real control over the filter, and it sometimes captures perfectly normal messages from frequent correspondents.
If you don’t check your spam folder you may occasionally miss messages you don’t want to miss, and if you do check you may have to wade through pages of spam headers — and most of the time you’d only find spam there. So I imagine lots of people never check, and may not even suspect that some messages have been wrongly withheld.
I don’t know the nature of the filter, but I do believe it has been used for censorship purposes, although not in a consistent way. There was a period, several months ago, when every posting I sent out that had a controversial subject line was captured by the filter, and I had to go on the web and specifically declare that the posting was ‘not spam’. And then that suddenly stopped happening. It felt like an experiment, a market test, to see what level of complaints they might get, and who would even notice. It makes a lot of sense to do that kind of research, if you’re thinking about getting heavy with filtering sometime down the line.
And over the past several months, I've had repeated difficulties accessing mail-sending servers. Suddenly and repeatedly, my IP address is blocked from reaching outgoing servers. So far I've been able to get around this by rebooting my modem and getting a new IP address, but that's a loophole that could easily be closed. Riseup.net, a service by and for progressive activists, has repeatedly had its access to servers blocked, and has had difficulty getting reconnected.
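For context, the mundane machinery behind this kind of blocking is typically a DNS-based blocklist (DNSBL): the receiving server reverses the octets of the connecting IPv4 address, appends a blocklist zone name, and performs an ordinary DNS lookup; if the name resolves, the address is listed and the connection is refused. A minimal sketch in Python — the zone name here is just the conventional Spamhaus example, and any real service's behavior and terms of use differ:

```python
import socket

def reversed_octets(ip):
    """Reverse the octets of a dotted-quad IPv4 address,
    e.g. '192.0.2.1' -> '1.2.0.192'."""
    return ".".join(reversed(ip.split(".")))

def is_listed(ip, zone="zen.spamhaus.org"):
    """Return True if `ip` appears in the DNSBL `zone`.

    A successful lookup of <reversed-ip>.<zone> means the address
    is listed; an NXDOMAIN (socket.gaierror) means it is not.
    """
    query = f"{reversed_octets(ip)}.{zone}"
    try:
        socket.gethostbyname(query)
        return True
    except socket.gaierror:
        return False
```

Rebooting a modem to obtain a fresh dynamically-assigned address sidesteps exactly this kind of per-IP listing, which is why it works only until the blocking moves to some more durable identifier.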
I called my ISP, and they were very vague in their explanations. I had to press to learn that they subscribe to a firewall service that promises to keep them up to date with 'known spam IP addresses', so the ISP can block those addresses from sending. They didn't admit this in so many words, but I teased out enough facts to figure out what's going on.
Now imagine that you are selling a firewall service. You want a sophisticated filter, so you can offer a competitive product, you want the filter to have credibility in the market’s eyes, and you’d rather not invest a lot of time and money in artificial-intelligence R&D. What better solution than to license Google’s filter, perhaps add more filtering of your own, and then say ‘Google Inside’, just like ‘Intel Inside’. And given Google’s marketing creativity in the visible market, it would be surprising if they weren’t licensing their filtering technology on a confidential basis to firewall providers.
If this is indeed the scenario, then what it means is that Google maintains a real-time database of ‘spam IP addresses’, most likely categorized in some way, which licensees (firewall providers) can customize and resell to their client ISPs. As I’ve argued above, we can assume that Google’s spam database would be the industry standard in this very likely scenario.
Who in the firewall or ISP industries would be interested in critiquing Google’s filter, as long as all important clients are happy? Everyone would be much more interested in promoting the trustworthiness and necessity of the standard, so as to sell and justify their own products.
Also, when you look at the various pieces of legislation being considered to control the net and enforce copyright, a lot of the burden of enforcement and liability is being put on the shoulders of the ISPs. In order to stay in business, and avoid legal sanctions, they will need, at a minimum, to be able to show they are doing 'due diligence' in preventing piracy, in restricting other kinds of communications, and in restricting access to websites — anything that is deemed to be 'inappropriate communications' by our terror-paranoid, war-mongering, and economically-deranged Congress and White House.
Google is ideally positioned in the marketplace to define the standard of due diligence, given its technology and clout, and it is therefore ideally suited from the government’s perspective to be the primary contracting agency for covertly implementing whatever kind of censorship the government might choose to implement at any given time, in order to ‘protect national security’.
Now consider a list like cyberjournal. Let’s assume that the government, or the CIA, or whoever, maintains a list of the top 10,000 sources that are putting out information or viewpoints that the government, or whoever, determines to be ‘unhelpful’. I’m arrogant enough to hope cyberjournal would make that cut, and be in the top 10,000 winners. In fact, I think we’ve already made it.
For example, I subscribe under three different email addresses to cyberjournal and newslog, on all three incarnations of the list (riseup, yahoo, and google). Three addresses times three lists: I'm supposed to get nine copies back of everything I post. In fact, I'm lucky to get one back from each source, and on the previous cyberjournal posting, re/the consensus-based model, I didn't get any back from the google version of the list, and only one copy each from the other two. Who knows how many subscribers did or did not receive the post.
With Google arbitrarily and non-transparently handling censorship for the industry, the capability is clearly there to choke down communication in subtle yet effective ways, with hardly anyone noticing, and with likely complainers being easily marginalized / ignored / further censored.
IP addresses are passé. Censorship filters are more likely based on email addresses, traffic maps of who sends messages to whom, and a list of the identities of ‘unhelpful’ sources. Any particular censorship regime can be activated or deactivated at any time, and have immediate real-time effect.
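To make the claim concrete, here is a toy model of identity- and traffic-based filtering: messages from listed senders are silently dropped, and so are messages from senders whose correspondence history is dominated by listed addresses. Everything here — the class, the threshold, the addresses — is hypothetical illustration, not any real filter's design:

```python
from collections import defaultdict

class SilentFilter:
    """Toy model of identity- and traffic-map-based filtering.

    A message is silently dropped (no bounce, no spam folder) if
    its sender is on the 'unhelpful' list, or if more than
    `threshold` of the sender's observed recipients are listed —
    i.e. guilt by association in the traffic graph.
    """

    def __init__(self, listed, threshold=0.5):
        self.listed = set(listed)          # 'unhelpful' addresses
        self.sent_to = defaultdict(list)   # sender -> recipients seen
        self.threshold = threshold

    def deliver(self, sender, recipient):
        """Return True if the message is delivered, False if dropped."""
        self.sent_to[sender].append(recipient)
        if sender in self.listed:
            return False                   # listed source: dropped outright
        history = self.sent_to[sender]
        flagged = sum(r in self.listed for r in history)
        if flagged / len(history) > self.threshold:
            return False                   # too entangled with listed addresses
        return True
```

The point of the sketch is the failure mode it exhibits: neither the sender nor the recipient receives any signal that a drop occurred, and the drop rule can be switched on or off centrally at any moment.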
To pick a perhaps over-dramatic example, suppose a false-flag event were planned, and leaks had gotten out to some of the ‘unhelpful’ sources. Email lists and websites could be selectively and instantly blocked, to preserve the shock-and-awe value of the event. After the event no one would be interested in ‘conspiracy theories’ about how warnings had been censored.
That would be an example of 'acute' censorship, a temporary but extreme version. There is also 'chronic' censorship, where the 'invisible choking' method would be used: some unknown percentage of 'unhelpful' communications simply get 'lost in the mail' with no one being the wiser. There isn't even a spam folder involved that you can examine.
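The chronic case can be modeled in a few lines: rather than blocking a source outright, a filter could silently lose each of its messages with some fixed probability, which is far harder for anyone to detect or prove than a total blackout. This is a hypothetical sketch of the idea, not a description of any known system:

```python
import random

def chronic_drop(messages, loss_rate=0.2, seed=None):
    """Toy model of 'invisible choking': each message from a
    targeted source is independently and silently lost with
    probability `loss_rate`; the rest are delivered normally.

    A seed is accepted so the behavior is reproducible in tests.
    """
    rng = random.Random(seed)
    return [m for m in messages if rng.random() >= loss_rate]
```

At a 20% loss rate, a list owner would see exactly the symptoms described above — fewer echoes of their own postings, a slowly shrinking circle of correspondents — with nothing conclusive to point to.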
You wouldn’t need to be a user of any of Google’s branded products in order to be affected by the censorship regime of the day. Any time a message to or from you goes through any kind of firewall, anywhere along the line, the licensed filtering mechanism might just decide to drop that message in the bin, for reasons only our betters know for sure.
In the case of cyberjournal, there is a small number of people, say about fifteen, whom I hear from frequently, and every once in a while a new person writes to me. For all I know, it is only that handful that receives the postings at all. When my own postings routinely fail to echo back to me as they should, I have legitimate cause to wonder.
Gmail’s visible filter provides an ongoing test bed of new filtering schemes, where the likelihood of complaints is maximized. You may not get a meaningful response to complaints, but it is likely that all complaints are being put to meaningful use as victim-feedback intelligence. Thus filter enhancements can be tested on a large unsuspecting audience, before being deployed on their critical mission of ‘protecting national security’.
best wishes to whoever might be receiving this,
rkm