Privacy

"It made me feel like I was eavesdropping"

The workers with knowledge of the project said it was initially intended to improve the chatbot’s voice capabilities, but the volume of sexual or vulgar requests quickly turned it into an NSFW project.

“It was supposed to be a project geared toward teaching Grok how to carry on an adult conversation,” one of the workers said. “Those conversations can be sexual, but they’re not designed to be solely sexual.”

“I listened to some pretty disturbing things. It was basically audio porn. Some of the things people asked for were things I wouldn’t even feel comfortable putting in Google,” said a former employee who worked on Project Rabbit.

“It made me feel like I was eavesdropping,” they added, “like people clearly didn’t understand that there’s people on the other end listening to these things.”

A rude technology deserves a rude response

Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright, the way AI tech companies further the patterns of empire, how it’s a con that enables fraud and disinformation and harassment and surveillance, the exploitation of workers, as an excuse to fire workers and de-skill work, how they don’t actually reason and probability and association are inadequate to the goal of intelligence, how people think it makes them faster when it makes them slower, how it is inherently mediocre and fundamentally conservative, how it is at its core a fascist technology rooted in the ideology of supremacy, defined not by its technical features but by its political ones.

[…]

I am here to be rude, because this is a rude technology, and it deserves a rude response. Miyazaki said, “I strongly feel that this is an insult to life itself.” Scam Altman said we can surround the solar system with a Dyson Sphere to hold data centers. Miyazaki is right, and Altman is wrong. Miyazaki tells stories that blend the ordinary and the fantastic in ways people find deeply meaningful. Altman tells lies for money.

Don't fall into the well again

Now that we’ve had a little more time since the initial shock and awe of Elon Musk’s purchase of Twitter, new social networks have had a chance to build up interest as potential replacements for Twitter.

We have been given a rare chance, as a culture of social media addicts, to break a cycle we keep repeating and move to something that isn’t just owned by somebody who stands to make billions of dollars from your decision.

Unfortunately, we’re at real risk of falling into the same well as before, convinced that the open choice is somehow too complicated. That has meant that two separate social networks, Post and Hive, have drawn the interest generated by Twitter’s failings away from open networks like Mastodon or the broader Fediverse[.]

Don’t fall into the well again.

Is Google Always Listening?

Jarvis Johnson explains that big companies like Google and Facebook don’t need to listen in on you via your phone/tablet, because they already collect so much data on you that they can guess what ads might be relevant from that alone. I love the critical-thinking aspect of this - it’s easy to oversimplify complex systems, because our brains aren’t good at understanding that many variables, but reality is not required to bend to our beliefs.

I have nothing to hide

I know you are not a terrorist, but you still take privacy for granted in many aspects of your daily physical life. You expect it every time you go to the bathroom and close the door; you expect it when you go to the doctor or confide in a close friend.

[…]

Everyone has things they keep to themselves, things they say only to a special person, things they share only with close friends or family, things they discuss only with a doctor, and so on and so forth.

If in real life we have so many layers, why don’t we expect the same level of privacy on the internet? We are giving Facebook, Google, LinkedIn and the others more information about ourselves than we give to our SO, or even to ourselves (remember, Google is investing in DNA mapping).

Browse Against the Machine

[T]he web looks more and more like a feudal system, where the geography of the web has been partitioned off by the Frightful Five. Google, Facebook, Microsoft, Apple and Amazon are our lords and protectors, exacting a royal sum for our online behaviors. We’re the serfs and tenants, paying homage inside their walled fortresses. Noble upstarts are erased or subsumed under their existing order.

Disqus is a performance and privacy nightmare

Relevant points [of disabling Disqus] are:

  • Load-time goes from 6 seconds to 2 seconds.
  • There are 105 network requests vs. 16.
  • There are a lot of non-relevant requests going through to networks that will be tracking your movements.
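As a rough sketch of how you might reproduce that kind of before/after comparison yourself (my addition, not part of the quoted post): the snippet below uses the standard browser Performance API and can be pasted into the devtools console once on a page with Disqus and once without; the grouping logic is only illustrative.

    // Approximate page load time and request count for the current page.
    const [nav] = performance.getEntriesByType("navigation");
    console.log(`page load: ${(nav.duration / 1000).toFixed(1)}s`);

    const resources = performance.getEntriesByType("resource");
    console.log(`network requests: ${resources.length + 1}`); // +1 for the HTML document itself

    // Group third-party requests by hostname to see which trackers were pulled in.
    const counts = new Map();
    for (const r of resources) {
      const host = new URL(r.name).hostname;
      if (host !== location.hostname) counts.set(host, (counts.get(host) ?? 0) + 1);
    }
    console.table([...counts.entries()].sort((a, b) => b[1] - a[1]));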

Among the networks you can find:

  • disqus.com - Obviously!
  • google-analytics.com - Multiple requests; no idea who’s capturing your movements.
  • connect.facebook.net - If you’re logged into Facebook, they know you visit this site.
  • accounts.google.com - Google will also map your visits to this site with any of your Google accounts.
  • pippio.com - LiveRamp identity mapping for harvesting your details for commercial gain.
  • bluekai.com - Identity tracking for marketing campaigns.
  • crwdcntrl.net - Pretty suspect site listed as referenced by viruses and spyware.
  • exelator.com - Another identity and movement tracking site, which even has a virus named after it!
  • doubleclick.net - We all know this one: ad services and movement tracking, owned by Google.
  • tag.apxlv.net - Very shady and tricky to pinpoint an owner as they obfuscate their domain (I didn’t even know this was a thing!). Adds a tracking pixel to your site.
  • adnxs.com - More tracking garbage, albeit slightly more prolific.
  • adsymptotic.com - Advertising and tracking that supposedly uses machine learning.
  • rlcdn.com - Obfuscated advertising/tracking from Rapleaf.
  • adbrn.com - “Deliver a personalized customer journey across devices, channels and platforms with Adbrain customer ID mapping technology.”
  • nexac.com - Oracle’s Datalogix, their own tracking and behavioural pattern rubbish.
  • tapad.com - OK, I can’t be bothered to look this up anymore.
  • liadm.com - More? Oh, ok, then…
  • sohern.com - Yup. Tracking.
  • demdex.net - Tracking. From Adobe.
  • bidswitch.net - I’ll give you one guess…
  • agkn.com - …
  • mathtag.com - Curious name, maybe it’s… no. It’s tracking you.

I can’t visit many of these sites because I have them blocked in uBlock Origin, so this information was gleaned from Google crawl results for the webpages and third parties. Needless to say, it’s a pretty disgusting insight into how certain free products turn you into the product. What’s more worrying are the services that go to lengths to hide who they are and what their purpose is in tracking your movements.
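For what it’s worth, blocking hosts like these yourself takes only a handful of static filters. The lines below are a hypothetical excerpt in standard uBlock Origin / Adblock filter syntax covering a few of the domains listed above; they are not the author’s actual filter list.

    ! Block Disqus itself plus a few of the third-party trackers it pulls in
    ||disqus.com^
    ||pippio.com^$third-party
    ||bluekai.com^$third-party
    ||exelator.com^$third-party
    ||doubleclick.net^$third-party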

Build a Better Monster: Morality, Machine Learning, and Mass Surveillance

We built the commercial internet by mastering techniques of persuasion and surveillance that we’ve extended to billions of people, including essentially the entire population of the Western democracies. But admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face. After all, we’re good people. We like freedom. How could we have built tools that subvert it?

[…]

The learning algorithms have no ethics or boundaries. There’s no slot in the algorithm that says “insert moral compass here”, or any way to tell them that certain inferences are forbidden because they would be wrong. In applying them to human beings, we leave ourselves open to unpleasant surprises.

The issue is not just intentional abuse (by trainers feeding skewed data into algorithms to affect the outcome), or unexamined bias that creeps in with our training data, but the fundamental non-humanity of these algorithms.

[…]

So what happens when these tools for maximizing clicks and engagement creep into the political sphere?

This is a delicate question! If you concede that they work just as well for politics as for commerce, you’re inviting government oversight. If you claim they don’t work well at all, you’re telling advertisers they’re wasting their money.

Facebook and Google have tied themselves into pretzels over this. The idea that these mechanisms of persuasion could be politically useful, and especially that they might be more useful to one side than the other, violates cherished beliefs about the “apolitical” tech industry.

[…]

One problem is that any system trying to maximize engagement will try to push users towards the fringes. You can prove this to yourself by opening YouTube in an incognito browser (so that you start with a blank slate), and clicking recommended links on any video with political content. When I tried this experiment last night, within five clicks I went from a news item about demonstrators clashing in Berkeley to a conspiracy site claiming Trump was planning WWIII with North Korea, and another exposing FEMA’s plans for genocide.

This pull to the fringes doesn’t happen if you click on a cute animal story. In that case, you just get more cute animals (an experiment I also recommend trying). But the algorithms have learned that users interested in politics respond more if they’re provoked more, so they provoke. Nobody programmed the behavior into the algorithm; it made a correct observation about human nature and acted on it.
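To see how that can happen without anyone programming it, here is a toy sketch (my addition, not from the essay): an epsilon-greedy learner in TypeScript that only maximizes clicks. The click rates are invented; the point is that if provocative content gets even slightly more response, a pure engagement maximizer ends up recommending it almost exclusively.

    // Toy epsilon-greedy click maximizer. Nothing tells it to prefer provocation;
    // it simply learns which item gets more clicks and keeps serving it.
    type Item = { name: string; clickRate: number };

    const items: Item[] = [
      { name: "measured news report", clickRate: 0.05 },
      { name: "outrage-bait conspiracy", clickRate: 0.12 }, // assumed: provocation gets more response
    ];

    const shown = items.map(() => 0);
    const clicks = items.map(() => 0);
    const epsilon = 0.1; // small amount of exploration

    for (let step = 0; step < 100_000; step++) {
      // Usually recommend whichever item has the best observed click rate so far.
      const rates = items.map((_, i) => (shown[i] > 0 ? clicks[i] / shown[i] : 0));
      const pick =
        Math.random() < epsilon
          ? Math.floor(Math.random() * items.length)
          : rates.indexOf(Math.max(...rates));
      shown[pick] += 1;
      if (Math.random() < items[pick].clickRate) clicks[pick] += 1;
    }

    items.forEach((item, i) => console.log(`${item.name}: recommended ${shown[i]} times`));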

[…]

But even though we’re likely to fail, all we can do is try. Good intentions are not going to make these structural problems go away. Talking about them is not going to fix them.

We have to do something.