This morning I read an article in the Economist about a kid who was born without a cerebellum. Learning to walk, among other things, has proven much harder for him than for other kids his age. Yet he has had more success than kids whose cerebellums are merely damaged. This is partly because other parts of his brain have compensated for the part that is missing; apparently, compensating for a damaged part is harder than compensating for one that is absent entirely.
Another reason he’s seen success and exceeded the expectations of medical experts is his parents. The Economist article describes how his parents have acted as a cerebellum for him. Repeatedly, they pushed him to stand up when he would rather have crawled. When he tottered off a trail while walking through the zoo, they pulled him back on. He was momentarily agitated, not entirely sure why, but then he got back on track, mentally.
This is an extreme case, but what it and others like it show is that if a human brain can use other brains to aid its processing power, it will, and that as humans we tend to rely on this distributed processing power. Whether in a family, a social group, or the workplace, I think it is important to understand our own distributed processing. If groups aren’t communicating or are stuck in separate work silos, the value they bring to an organization drops significantly. On the flip side, if these distributed systems can interface with each other, we can expect considerable value added to innovation supply chains.
We often relish rugged mental individualism, but by ignoring our distributed models of thinking, we cut off our true potential for generating value within an organization. It is true that we can and should “put our heads together.” My son calls this “Hive Mind”.
If you’re like many IT professionals who’ve had anything to do with large amounts of data, you’ve become immune to the phrase ‘big data’, mostly because the meaning behind that phrase can vary so wildly.
Processing ‘big data’ can seem out of reach for many organizations, either because of the infrastructure costs required to establish a foothold on this front or because of a lack of organizational expertise. And since the meaning of ‘big data’ can vary so much, you may find that you’re doing ‘big data’ work and still ask yourself, “Is this big data?” Or an observer may suggest that something is ‘big data’ when you know full well that it isn’t.
With my own background in data, I’m ever curious about what’s out there that can make the threshold into ‘big data’ easier to cross. I’m also interested in the security considerations around these solutions.
In the last week or so, I’ve gotten more familiar with AWS S3 buckets and a querying service called Amazon Athena. Here’s the truly amazing thing: you can simply drop files in an S3 bucket and query them straight from Amazon Athena. (There are a couple of steps to go through, but they are mostly trivial.) And for the most part, there’s not much of a limit on how much data you can query and analyze. You can scan 1 TB of data for $5. What? That’s right. And you didn’t have to set up servers, database platforms, or any of that. I’ll be exploring Amazon Athena more and more over the coming weeks. If you have an interest in this sort of thing, I suggest you do the same.
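To make that concrete, here’s a hedged sketch of what kicking off an Athena query can look like from Python with the boto3 SDK, plus a tiny helper for the $5-per-TB-scanned pricing mentioned above. The database name, table name, and S3 locations are all made up for illustration; swap in your own.

```python
# Sketch only: database, table, and S3 locations below are hypothetical.

def athena_cost_usd(bytes_scanned: int) -> float:
    """Estimated Athena cost at the $5-per-TB-scanned rate mentioned above."""
    return 5.0 * bytes_scanned / 2**40  # 2**40 bytes = 1 TB

def run_example_query() -> str:
    """Start an Athena query and return its execution ID.

    Requires AWS credentials and the third-party boto3 SDK
    (pip install boto3); not called here so the sketch stays inert.
    """
    import boto3

    athena = boto3.client("athena")
    response = athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) AS hits FROM web_logs GROUP BY status",
        QueryExecutionContext={"Database": "my_logs_db"},  # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical bucket
    )
    return response["QueryExecutionId"]
```

By the helper’s math, scanning a full terabyte costs $5.00, and a half-terabyte scan comes out to $2.50.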
One note: Google has something similar called BigQuery, so that might be worth a look as well. I’ve explored BigQuery briefly, but I keep coming back to various AWS services since they seem to be holding strong as a dominant leader in emerging cloud technologies. But as we all know, the emerging technology landscape can change very quickly!
For some time, I’ve been interested in learning about the Raspberry Pi. It’s a little bare-bones computer that packs a big punch. And to top it off, it’s quite affordable. Through work I heard about a way to use a Raspberry Pi with an OS called Retropie. Retropie is an emulation platform that lets you play scores of old games…if you have the digital files for them, many of which can be found with the help of Google.
I’m not much into modern video games (as in games from the last 20 years or so), but I did play NES games back when I was in jr. high and high school. I do still have my original NES, but it has a number of issues that make it less than reliable for playing. My kids are interested in the older games because I’ll actually join them when they play. And, quite frankly, because the older games are super fun to play and easy to learn.
Anyway, Retropie is a great way to learn how to use and get familiar with the Raspberry Pi. You simply burn the Retropie image onto a micro SD card, pop it in the micro SD card slot, and boot it up! There are a few other things you need to know, but that’s the gist of it. Get a few games, a controller or two, and a monitor with an HDMI input handy, and you’re good to go. That’s a bit of an oversimplification, but please do explore Retropie and Raspberry Pi if you’re at all interested in this sort of thing and are looking for a good way to get familiar with the Raspberry Pi world.
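For anyone wondering what “burn the image” looks like in practice, here’s a rough sketch of the usual Linux approach. The file name and the `/dev/sdX` device are placeholders, not real paths; double-check the device with `lsblk` first, because `dd` will happily overwrite the wrong disk.

```shell
# Hypothetical sketch -- the image file name and /dev/sdX are placeholders.
lsblk                                   # identify the micro SD card device first
gunzip retropie.img.gz                  # unpack the downloaded image
sudo dd if=retropie.img of=/dev/sdX bs=4M status=progress conv=fsync
sync                                    # flush all writes before removing the card
```

There are also point-and-click imaging tools for Windows and macOS that do the same job if the command line isn’t your thing.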
Here are a couple key links:
These days, efforts to revamp company culture are in vogue. I’m going to attempt to articulate what I see as a connection between machine learning and efforts to change company culture. Stay with me here a bit, because the analogy doesn’t show up until the fourth paragraph and I need to share a little background first. 🙂
One group leading the charge to change company culture is Partners in Leadership (https://www.partnersinleadership.com). They use a tool that models the path to changed results as a pyramid moving through the following steps: EXPERIENCES >> BELIEFS >> ACTIONS >> RESULTS. According to the model, you start with the results you want to see as an organization and work backward until you’ve arrived at the experiences you need to create. The thinking is that experiences shape beliefs, which shape actions, which shape results. They maintain that you cannot simply skip ahead to results until the rest of the house is in order first.
As for the experiences, they actually need to be high-quality experiences. Partners in Leadership breaks these experiences into four types (big paraphrase here): 1) easy to interpret, 2) needing work to interpret, 3) carrying very little meaning, so there isn’t much to interpret, and 4) experiences that, well, kind of did the opposite of what they were intended to do.
Now it is time for the machine learning analogy! Boiled down, machine learning is essentially learning from experiences (data) in order to shape beliefs (trained statistical models). These beliefs/models turn into actions (acting on the outcome of a model), which lead to results. Critical to this process are the experiential data and their interpretation (the model). We train our models by feeding data (experiences) into them. Why am I making this connection? Because organizations are really struggling to understand machine learning. Why not piggyback off something they’re learning already? Results from machine learning algorithms are no different from results gleaned from an organization’s cultural change initiatives. What data do you have that you can use to shape your statistical models? Which actions do you need to take to get results? You can change your culture and understand machine learning at the same time!
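The pyramid maps onto a machine learning loop almost line for line. Here’s a toy sketch in plain Python; the data and the “model” (a simple midpoint threshold between class averages) are entirely made up, just to show each step of EXPERIENCES >> BELIEFS >> ACTIONS >> RESULTS in code.

```python
# Toy illustration of the pyramid as a machine learning loop.
# All numbers here are invented for the example.

# EXPERIENCES: observed data points -- (hours practiced, passed the test?).
experiences = [(1, False), (2, False), (3, False), (6, True), (7, True), (8, True)]

# BELIEFS: a trained "model" -- here, the midpoint between the average
# hours of those who passed and those who didn't.
passed = [h for h, ok in experiences if ok]
failed = [h for h, ok in experiences if not ok]
threshold = (sum(passed) / len(passed) + sum(failed) / len(failed)) / 2

# ACTIONS: act on the model's output for any new case.
def predict(hours: float) -> bool:
    """Predict a pass whenever practice hours meet the learned threshold."""
    return hours >= threshold

# RESULTS: how well the actions line up with the observed outcomes.
results = sum(predict(h) == ok for h, ok in experiences) / len(experiences)
```

On this made-up data the learned threshold lands at 4.5 hours and the predictions match every observed outcome, which is the whole pyramid in miniature: experiences trained a belief, the belief drove actions, and the actions produced measurable results.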