Monday, 27 November 2017

Falling down the rabbit hole

This is a phrase I use often - "So I was looking at this, thinking about how to do that, started reading about the first steps, then Googled around a bit, and 14 hours later I realised how far down the rabbit-hole I had fallen."

I'm writing this after 2 solid days of rabbit hole, and I've decided to describe where I went, as I begin to climb out.

For context, I've spent the last 48 hours totally cut off from humanity, and without any real drive to do anything. I had a few things I needed to get done in the short term, but I'm lucky enough to be able to push most things to one side if something of interest comes along.

So, where did I start? Thinking back, it was the Hello Internet podcast I was listening to on the way back from Twickenham to Cardiff. An unbelievably long journey at the end of a long day, which meant my brain wasn't 100% in gear. Probably less than 10%. So, when CGP Grey said something along the lines of "do not play this game, you'll lose an entire weekend", I figured, "Well, I have a weekend that needs killing, let's try it out."

And so I discovered "Universal Paperclips", which I think of as a Cookie Clicker game, after an earlier game of similar concept. Reddit refers to it as an incremental game, which I discovered after Googling a "soft lock" issue that was affecting me in-game.

---       7 hours later        ---

Suddenly it's 4am and I'd achieved exactly nothing all Sunday.

Or did I? Maybe I can persuade myself that this was in any way a valuable experience, and that I've learnt something from it.

For instance, the game plays heavily into the "Dumb AI" way of ruining the world/known universe by accidentally creating self-replicating matter converters.

Similar to the Grey Goo concept, or the Replicators of the Stargate universe, these are interplanetary machines which use any and all matter to create more of themselves, with the eventual aim of... something. What that goal actually is turns out to be useful for categorising the end results of various nightmarish scenarios.

These have been described in various science fiction books and by various great thinkers as von Neumann machines.

From the links above, I discovered that these can be benevolent/benign or essentially evil.

Good:
  • Such as the monoliths which appear in 2001: A Space Odyssey (which teach apes how to hit each other with sticks, ostensibly kick-starting the race to become upright, higher-thinking humans).
or Evil:
  • Such as the "Berserkers" found in a series of short stories by Fred Saberhagen. I hadn't heard of these stories, but I'll be sure to add them to the bookshelf of Asimov, Philip K. Dick et al. which I've collected over the years.
Of extra note here is the relevance to the Fermi Paradox, which is also covered in the above article. Essentially: if the universe could be filled with replicating robots, and there has been all of time for some advanced race to create (intentionally or accidentally) a fleet of single-minded, planet-eating automatons, where are they? They've had plenty of time to get here.

So, if they do turn up in the skies, or start raining down upon us one day, we get to ask ourselves: friend or foe?

Another concept which turns up in the Universal Paperclips game is Yomi.

Yomi is an in-game currency used to buy upgrades for a stock-brokering AI. It's earned through the in-game mini-game "Strategic Modelling", which for most of the game plays itself.

This was unexplained and confusing when I first discovered it, but I kept clicking, numbers grew higher and higher, so I assumed I was doing things right. Or at least not actually wrong.

Coming back to it later, it became clear it was actually playing hundreds of rounds of the Prisoner's Dilemma - a common example when introducing Game Theory - coded into something which could win or lose based on the different strategies I was unlocking and selecting.

As I hadn't realised at the time that this was what was going on, I got to rediscover it all again now.
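The idea is simple enough to sketch in a few lines of JavaScript. This is a minimal, hypothetical version of an iterated Prisoner's Dilemma match - the payoff values here are the textbook ones (3/3 for mutual cooperation, 5/0 for defecting against a cooperator, 1/1 for mutual defection), and the strategy names are illustrative; the game's actual numbers and strategy set will differ.

```javascript
// Classic Prisoner's Dilemma payoffs, indexed by the pair of moves.
// 'C' = cooperate, 'D' = defect. [my points, their points].
const PAYOFF = {
  CC: [3, 3], // both cooperate
  CD: [0, 5], // I cooperate, they defect
  DC: [5, 0], // I defect, they cooperate
  DD: [1, 1], // both defect
};

// Each strategy sees the opponent's move history and returns 'C' or 'D'.
const strategies = {
  alwaysCooperate: () => 'C',
  alwaysDefect: () => 'D',
  titForTat: (theirMoves) =>
    theirMoves.length === 0 ? 'C' : theirMoves[theirMoves.length - 1],
};

// Play one iterated match between two strategies and return both scores.
function playMatch(stratA, stratB, rounds) {
  const movesA = [], movesB = [];
  let scoreA = 0, scoreB = 0;
  for (let i = 0; i < rounds; i++) {
    const a = stratA(movesB); // A reacts to B's history
    const b = stratB(movesA); // B reacts to A's history
    const [pa, pb] = PAYOFF[a + b];
    scoreA += pa;
    scoreB += pb;
    movesA.push(a);
    movesB.push(b);
  }
  return [scoreA, scoreB];
}
```

Over ten rounds, tit-for-tat loses the first round to an always-defector and then matches it blow for blow, while two tit-for-tat players cooperate throughout - which is the whole point of iterating the dilemma rather than playing it once.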

In the meantime, I had also discovered this. Another clicker, but in space, and weirdly potato-themed. This one had me writing simple JavaScript scripts to calculate the worth of items to "buy", these items being space-based potato cannons, probes and landers. Because reasons.
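For flavour, here's the sort of throwaway script I mean - rank the purchasable items by production gained per unit of cost, and buy whatever tops the list. The item names and numbers below are made up for illustration; the real game's values are different.

```javascript
// Hypothetical shop items: name, cost, and production gained per second.
const items = [
  { name: 'potato cannon', cost: 500, perSecond: 4 },
  { name: 'probe', cost: 2200, perSecond: 25 },
  { name: 'lander', cost: 12000, perSecond: 90 },
];

// Rank items by production-per-cost, best value first.
function bestValue(items) {
  return items
    .map((item) => ({ ...item, valuePerCost: item.perSecond / item.cost }))
    .sort((a, b) => b.valuePerCost - a.valuePerCost);
}
```

With those made-up numbers the probe comes out on top (25/2200 ≈ 0.011 per second per unit spent), despite the lander producing the most in absolute terms - which is exactly the kind of non-obvious answer that justifies writing the script.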

But if there's anything worth doing having got this far, it's this: "THE EVOLUTION OF TRUST", a wonderful interactive explanation of the game-theory strategies that were gathering Yomi earlier. It actively avoids naming the Prisoner's Dilemma, instead focusing on the societal challenges of friendship and trust in a globally interconnected world, with elements of chaos such as misinformation and accidental actions. It's great.

From there I ended up at data visualisations of trust, which is actually pretty close to bits of my day job. I've been looking for interesting and informative ways of showing social-science data, normally with a spatial element to it, and this may be useful for that.

So that's kinda neat.

P.S. Some nutcase wrote UniversalHotstoppers, which makes more sense if you're a regular HI listener. A rebel flag to bait Grey and Brady too - good work.
