
s5e01: Fully Connected, Feed Forward

0.0 Station Ident

Last episode looked like as good a place as any to do a season break (even though it might not have been planned). So here we are at the beginning-ish of a new academic year, within spitting distance of Halloween and your usual Western pagan ceremonies.

1.0 Fully Connected, Feed Forward

This is just a dump of the things that are in my head at the moment, with a few notes. Your interest in the following may vary, I make no representation whatsoever about how useful these observations or noticings might be to you.

Mr. Codex of Slate, Star, Codex has written about Predictive Processing[0], a model that seeks to explain how our brains work. I’ve been interested in how our brains work for a long time, so if you’re into stuff like Steven Pinker, Blank Slateism, “yeah, but Sapir-Whorf is *kind* of true, right?” and a whole host of other things because, *if you think about it*, the fact that you’re conscious is *really really weird*, then take a look at that link. The super-high-level overview of Predictive Processing is that your brain essentially does two things: it constantly generates predictions about the world (and all of those predictions are interrelated), and it checks those predictions against your sensorimotor input at every single layer of cognition. So you’ve got things like predictions about right-angles in nature (probably not) all the way up to IF there is a law enforcement officer in this area THEN certain other agents visible in the area may be more likely to act in a certain way. These predictions like to be correct (the more correct they are, the more we understand the world and can affect it), but sometimes they override our raw sensorimotor input. If you’re following, this is why optical illusions exist and work: because we have top-down processing or “priors” that kind of cook the books and alter what we perceive, contradicting “unnatural” raw input. For example: faces do not suddenly go concave, so you’re probably seeing another face, not an inside-out face.

In this way, predictive processing can be abstracted away to a maths problem: how do you generate better predictions about the world based on historical input data, and what do you do when your input data appears to suggest a problem? One way of doing this is by using a piece of maths called a Kalman filter[1].

Remember Beagle 2? It was the British-led lander on ESA’s Mars Express mission that, disappointingly for everyone involved, failed to land properly on Mars[2]. After an investigation suggested that (I think) a parachute was released too early, one of the (armchair, amateur, internet) discussions I saw about what might have happened to the probe put the problem down to sensor fusion.

You’re not controlling Beagle 2’s lander remotely. It’s too far away. It has to decide, by itself, when to release the parachute. We cannot accurately simulate or predict ahead of time what the conditions are going to be. So, what does the lander do? It’s got all these sensors – rotation, altitude, acceleration – and uses them to figure out where it is, where it’s going and what’s going to happen next. We can use this prediction about what will happen next to figure out when the parachute needs to fire to arrest the probe’s speed on landing.

But! What happens if, say, your altimeter inaccurately reports that you’re suddenly 100 meters from the surface instead of, say, 20,000 meters from the surface? *All* of your other data is telling you that it *looks* just like you’re 20,000 meters from the surface. Do you cook the books and pretend you’re still 20km up? Do you ignore the altimeter data? As a piece of math that helps with synthesizing and increasing the accuracy of predictions, a Kalman filter would help you figure out that *if* the (broken) altimeter reading is true, *then* our prediction of the lander’s proprioception would look like x (ie: what do we *expect* to see if we’re at 100m). If data then comes back that makes it look like it’s *more likely* that we’re at 20km than 100m, then we can decide to throw away the altimeter data. That may affect the quality of our prediction about the world, because one of our inputs has gone screwy, but *at least we know to ignore it now*.
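
A minimal sketch of that idea, in Python, as a one-dimensional Kalman-style filter tracking altitude during descent. All the numbers here (descent rate, noise levels, the 5-sigma rejection gate) are invented for illustration rather than being anything like Beagle 2’s real values, and a real lander would fuse many more sensors than a single altimeter.

    import math

    # State: our best estimate of altitude (metres) and how unsure we are of it.
    altitude_est = 20_000.0
    variance_est = 50.0 ** 2

    DESCENT_RATE = 100.0         # assumed metres lost per timestep
    PROCESS_VAR = 20.0 ** 2      # how much the true state can wander per step
    MEASUREMENT_VAR = 30.0 ** 2  # how noisy we believe the altimeter is
    GATE_SIGMAS = 5.0            # reject readings more than 5 sigma from prediction

    def step(measurement):
        global altitude_est, variance_est
        # 1. Predict: roll the model forward one timestep.
        predicted_alt = altitude_est - DESCENT_RATE
        predicted_var = variance_est + PROCESS_VAR

        # 2. Innovation: how surprised are we by this reading?
        innovation = measurement - predicted_alt
        innovation_var = predicted_var + MEASUREMENT_VAR
        sigmas = abs(innovation) / math.sqrt(innovation_var)

        if sigmas > GATE_SIGMAS:
            # The altimeter disagrees wildly with everything else we believe:
            # throw the reading away and keep the (less certain) prediction.
            altitude_est, variance_est = predicted_alt, predicted_var
            return f"rejected {measurement:.0f} m ({sigmas:.0f} sigma), holding {altitude_est:.0f} m"

        # 3. Update: blend prediction and measurement, weighted by their variances.
        gain = predicted_var / innovation_var
        altitude_est = predicted_alt + gain * innovation
        variance_est = (1 - gain) * predicted_var
        return f"accepted {measurement:.0f} m, estimate now {altitude_est:.0f} m"

    for reading in (19_890, 19_810, 100, 19_605):   # third reading is the glitch
        print(step(reading))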

Back to predictive processing. The theory is that your brain acts like everything is normal when your higher-level prediction (this is a policeman) agrees with your lower-level data (it turns out that you can object-recognize an insignia, or something, or you can see a stick or a holster). When that doesn’t happen (e.g. you predict a policeman because there are boots and the right uniform and a hat and a radio in the right place, but instead of a gun in a holster there’s a plush blue Elmo), then that layer freaks out and things fire in a state of surprisal, which (as Mr. Codex notes) is an excellent neuroscience term for what happens when, er, a neuron fires because something happens that does not usually happen (ie: it is surprised).
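
(An aside for the quantitatively minded: “surprisal” also has a precise information-theoretic reading, the negative log-probability of what you just observed under your model. A toy illustration, with probabilities invented purely for the example:)

    import math

    def surprisal_bits(p):
        # Surprisal of an observation that had probability p, measured in bits.
        return -math.log2(p)

    # Gun in the holster, roughly as predicted: ~0.07 bits of surprise.
    print(surprisal_bits(0.95))
    # A plush blue Elmo in the holster instead: ~13.3 bits of surprise.
    print(surprisal_bits(0.0001))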

Now back to mental illness and psychology and cognitive behavioral therapies. One of the trendy ideas right now that’s being taught to patients is that you don’t really have any control over your first set of thoughts. Your brain’s job is to just generate new thoughts. And I’d go further than that: one of the ways that it generates new thoughts is just by noticing associations. Once those associations get high-level enough, they’re less associations and more like “Pizza”, which very quickly might turn into an “I want pizza”.

The idea is your brain does this anyway. We know priming works. In a way, your brain *should* work this way. Stimulus comes in and your brain *reacts* to it. It *reacts* to stimulus by firing (anywhere from heavily to not-at-all) on anything, uh, related to the stimulus.

Thoughts aren’t real. They are just reactions to stimulus.

Now, a quick disclaimer. Over 15 years after first being introduced to it, I’m tentatively opening a copy of The Artist’s Way, which I’m reliably informed *can be* read as totally cult-like; it turns out you can ignore that reading and just focus on the results (which are intended to be an unblocking of creativity).

… and I would say that maybe creativity can also be expressed as concepts that are novel. That result in surprisal. That do not fit into the probability distribution of regular, expected events.

Julia Cameron (the author of The Artist’s Way) holds that creativity is always there, *and* she’s assembled a bunch of quotes supporting the idea that creativity comes from a higher power (ie: ‘the music of Madame Butterfly was dictated to me by God’). Let’s just ignore that part (she will accept ‘flow’). My theory at this point is that creative thoughts genuinely do come from *somewhere else*, in the sense that they’re not consciously summoned. Subconsciously, creative concepts can be generated by your brain *because that’s what your brain’s job is – to react to stimuli and offer up associations*, but we become blocked when, at an extreme, an openness to absurd creativity is disappeared because it doesn’t predict the world. Creativity generates surprisal, and you have to want to be OK with that.

This might be a roundabout way of explaining the folklore that children are preternaturally creative and that we beat it out of them: because we teach them to discount and ignore the surprisal value of dissonant thoughts and concepts.

That’s enough of that for now, anyway.

In completely the other direction, a bunch of random thoughts (ha, did you not read the above?):

* OAuth is your future: a collection of things I designed that didn’t exist in 2012, but probably exist now for completely different reasons[3]. These are mocked-up screenshots showing what an OAuth permissions interface might look like for potentially useful (or weakly dystopic) services. They range from the US DHS asking for your Foursquare history to help with a customs form (2017 called, HA, it says); the London Met Police asking to update your Twitter account (I can’t remember why?); Cigna asking for permission to connect to your Foursquare account to get better information on your habits (and a reminder that you might not have a choice due to your employer) (2017: HA again); to UK HMRC (the tax authority) asking for access to LinkedIn to better fill in the employment history on your tax return.

A few thoughts: OAuth presupposes that someone would want to *actively* ask for permission and be explicit about it rather than the (now-current) dark pattern of just hiding the fact that the data is now exfiltrated through a click-through acceptance of TOS. Mobile apps became a trojan horse for personal data (and TOS): you don’t need to ask for permission if it’s in the TOS, so just build something that allows you access (hi, Android and iOS app permissions) to personal data, and then treat it like it’s your (corporate) own.

The other point, of course, is that while it would be *efficient* for governments to programmatically import third-party data into their own systems, they don’t; they just ask for your username and password when you’re waiting to re-enter the country. Or they suborn Google’s network. You know. Apparently those two things are easier to do. (Ha, no, they’re not *easier* to do, and the (externalized and internal) costs of doing so may be higher than doing it the other way; it’s just that those costs don’t land on the people making the choice.)
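
For the avoidance of doubt about what “actively ask for permission and be explicit about it” looks like in practice, here’s a sketch of a perfectly ordinary OAuth 2.0 authorization request. The endpoint, client and scope names are all invented for illustration; the point is just that the request has to name, up front, exactly what it wants, and the person on the other end gets to refuse.

    from urllib.parse import urlencode

    # Hypothetical authorization server and client; only the shape of the
    # request is the point here.
    AUTHORIZE_URL = "https://auth.example.gov/oauth2/authorize"

    params = {
        "response_type": "code",
        "client_id": "hmrc-self-assessment",               # hypothetical client
        "redirect_uri": "https://tax.example.gov/callback",
        # The scopes are the whole point: each specific thing being asked for
        # is visible to, and refusable by, the person granting access.
        "scope": "employment_history.read positions.read",
        "state": "opaque-anti-csrf-token",
    }

    print(AUTHORIZE_URL + "?" + urlencode(params))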

* Equifax is a complete tire fire, to the extent that there are now think-pieces belatedly realising that the entire US credit reporting system and infrastructure might not be fit for purpose? This ignores the fact that it is “good enough” for now because, patently, nothing bad enough has happened yet for the system to be replaced. The systemic cost of replacement is too high for external parties to bear (too big to fail), and legacy/entrenched interests are never excited about having to do any more work than necessary, or about having their rent-providing positions in society removed.

OK, so: at what point does a small autonomous group successfully execute a sort of Fight Club maneuver? The Hollywood movie-plot version of this is setting off an EMP in financial districts, but given what we’ve seen of the *legacy technology that underpins a lot of western society*, I think this strategy is super difficult to pull off and super high-risk, right? I mean, a) you have to get hold of or build an EMP, and then b) physically deliver it. You might get caught! Why not just find a zero day in a piece of commercial infrastructure? I mean, it probably doesn’t even have to be a zero day! Equifax’s PINs for managing freezes of credit accounts are reportedly derived from a month-day-hour-minute *timestamp*, and have been for over ten years.
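
Some back-of-the-envelope arithmetic on why a timestamp-derived PIN is so weak, assuming the reported month-day-hour-minute reading is roughly right (the exact scheme varied between reports, so treat this as illustration, not a spec):

    # Every minute of a (leap) year gives ~527,040 possible timestamp PINs,
    # versus 100,000,000 for a uniformly random 8-digit PIN. And if you know
    # roughly when someone placed their freeze, the space collapses further.
    timestamp_pins = 366 * 24 * 60
    random_8_digit = 10 ** 8

    print(f"timestamp-derived keyspace: {timestamp_pins:,}")
    print(f"random 8-digit keyspace:    {random_8_digit:,}")
    print(f"the timestamp keyspace is ~{random_8_digit // timestamp_pins}x smaller")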

So if you don’t have to do that, how long until someone breaks? And it doesn’t have to be a lot of people, it can be a *small* number of people. You don’t even need to think about a Person of Interest-style plot about releasing a “virus” to “delete” data. All you have to do is reduce confidence or increase churn, right?

So, a few questions: at what point are things so bad that sufficiently motivated individuals will act? What are the precipitating events or conditions for a pseudo Boston Tea Party event, “but for personal data”?

It would be *more* difficult for this to happen to a digital native style company (e.g. Facebook, Google, Amazon), but I think orders of magnitude *easier* for it to happen to a legacy, entrenched company. Like Equifax or Experian. Maybe even Tesco or Walmart. That is, a legacy company that has critical data but isn’t smart enough about exposing it securely. Like, what may have happened with Equifax and Apache Struts.

* A workshop about algorithms in society and the myriad issues involved has a call for papers[4]. I mean, I guess it’s better to think about this at all, but it’s somewhat a case of: the horse has bolted from the barn and you now live in a world ruled by horses; I hope you like serving your horse masters.

* I think my default position on UBI now isn’t that it’s an easier/better way to do social services, but that it’s nothing more than an easier/better way to do economic stimulus. I am far too worried that without concentrating on the quality (availability, etc.) of the social services that may be purchased with said universal basic income, you’re just jerking around, pretending to solve one problem *without actually making things better*. (Note: things might be a bit better, but the presumption seems to be that *removing all government-provided social services and replacing them with just $$$* results in better outcomes, and I’m like: supplied by whom?)

* I saw a good piece (can’t find the link, sorry) whose basic position was a) “Well yes, *in the long run* mechanisation, automation and industrialisation improved things for people” but also b) “We did have a couple of world wars, hundreds of millions of people died, etc. etc.” The argument that “we always find new jobs” is one that works in the abstract but not in the personal (ie: telling me there *will* be a job in the future for me to do when automation takes away my current one is not great, because it implies there will be a gap). (There is always a gap.) Unless you explicitly design for no gap. I don’t see anyone designing for no gap.

* An encouraging – maybe – piece in the New York Times about a former warehouse worker who used to do boring, repetitive, physical work that is now done by something that is (more suited?) to doing that work; i.e. a robot[5]. This is not your usual story because (surprisal!) the former warehouse worker who used to do physical labour is now, like, a robot manager? She is slowly moving up beyond the API layer. I mean, this kind of makes sense? Someone who knows what the robot needs to do can now, kind of, look after what the robot should do when the robot throws an exception? Ms. Scott, former physical labourer, now “robot manager”, is happy to provide an economic-reporting money quote: “For me, it’s the most mentally challenging thing we have here. It’s not repetitive.” This is good, right? Now we just need a few hundred million, if not billions, of these.

[0] Book Review: Surfing Uncertainty | Slate Star Codex
[1] Kalman filter – Wikipedia
[2] Beagle 2 – Wikipedia
[3] OAuth is your future | Flickr
[4] Workshop on Trustworthy Algorithmic Decision-Making
[5] As Amazon Pushes Forward With Robots, Workers Find New Roles – The New York Times

OK. That was some writing. See you for the next episode. Maybe Netflix will pick up a full season this time or something, or we won’t have all the production difficulties that plagued last season.

Dan

