Two weekends ago I finished reading “Tribe of Hackers: Cybersecurity Advice from the Best Hackers in the World”. (Please read my previous blog entry to learn more.) I was amazed at how many of the “Tribe of Hackers” contributors recommended an old book, “The Cuckoo’s Egg: Tracking a Spy Through the Maze of Computer Espionage,” written by Clifford Stoll in 1989.
The story actually begins at Lawrence Berkeley National Laboratory in 1986. I won’t go into too many details about the setting or the time. In computer years, it was ages ago. So my question was: “How could such an old book about tracking down a hacker be so routinely recommended by a slew of highly knowledgeable and well-respected infosec professionals?”
Turns out cybersecurity hasn’t changed much. In “The Cuckoo’s Egg,” the hacker being tracked by Stoll, an astronomer, is aided by the following: 1) default credentials, 2) processes that run as root but shouldn’t, 3) well-known vulnerabilities, 4) the fact that folks can be fooled into entering their credentials into fake sites, 5) the desire of organizations not to share information, 6) the fact that various US agencies described this sort of attack as not their ‘bailiwick’, 7) the fact that various agencies don’t have the expertise to fully comprehend the risk to their data and network infrastructures, and 8) the fact that organizations could not possibly imagine someone actually penetrating their ‘high security’ environments. I’m sure I’m missing a few, but you get the idea.
Besides being a great old book, published when I was a curious, modem-tapping, BBS-surfing adolescent, it’s an excellent primer on the foundations of modern cybersecurity. Sure, the technology has changed, but the fundamentals haven’t moved an inch. Maybe all cybersecurity professionals have heard of this book except for me, but if you haven’t, consider reading it. Even if you’re not after the education, it’s wonderfully entertaining.
I’m pretty late to the API game. Recently I was on a call with a handful of security engineers, and they explained that they couldn’t afford to have their people staring at console screens any more. Instead, they rely almost entirely on APIs to automate and streamline their work. I’ve been hearing about API development forever, but I’d never gotten past the first hurdle: how to start. My answer to this is Postman.
Currently, I’m only using a free account. I’m in learning mode, but as I move toward doing more work with APIs in the future, I’ll absolutely be using Postman to test and verify my efforts. It’s also a great introduction to the security advantages and disadvantages of using APIs.
If you have any desire to dig into APIs and consider what they can do to add value to your work, try Postman. And don’t forget to check out a few of their tutorial videos.
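For the curious, the request/response loop that Postman makes point-and-click can be sketched in a few lines of Python. Everything here is a stand-in: the tiny local server plays the role of a real API so the example runs anywhere, and `call_api` is a hypothetical helper of my own, not anything Postman provides.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class EchoHandler(BaseHTTPRequestHandler):
    """A throwaway stand-in for a real API: answers every GET with JSON."""
    def do_GET(self):
        body = json.dumps({"path": self.path, "ok": True}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the console quiet
        pass

def call_api(url, token=None):
    """Send a GET (with optional bearer auth) and return the parsed JSON."""
    req = Request(url)
    if token:
        req.add_header("Authorization", "Bearer " + token)
    with urlopen(req, timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Spin up the fake API on an ephemeral port, hit it once, shut it down.
server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
result = call_api(f"http://127.0.0.1:{port}/status", token="demo-token")
server.shutdown()
```

Postman does the same thing with a friendlier interface: build the request, attach auth headers, inspect the JSON that comes back.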
Not long ago I did one of those “Strengths Finder” assessments put out by the folks at gallupstrengthscenter.com. At the top of my “strengths” list was the designation “Learner”. It essentially confirmed what I already almost knew — that I enjoy learning or getting to a point of understanding on a variety of topics.
Recently a colleague at work recommended that I consider taking a look at 2600 Magazine. So I did. I read the Kindle version of the most recent edition. What I really enjoy about reading the Hacker Quarterly is that it is filled with articles written by people who love to learn and understand things, specifically related to computers and technology.
Also, as someone who works in cyber security, it is exceedingly helpful for me to understand the types of vulnerabilities that are written about in Hacker Quarterly articles. For example, I read an article by an individual who was able to ‘investigate’ a very large number of routers in Malaysia. Initially, he had resource constraints, but discovered that by using a Spot Instance at AWS he could considerably broaden his reach at a very low cost: ten dollars. I’ll be seeking to understand these AWS Spot Instances and the impact they may have on the security of organizations in the future.
By and large the spirit of the “Hacker Quarterly” is centered around learning and understanding. And the culture of the group is such that criminal activity is frowned upon, though they do skirt the edges of legality from time to time. To have a window into this world is marvelous. I’m now reading through a whole ‘digest’ of issues from the past year. And if you’re a “Learner” like me, I suggest you do the same. Here’s their website: https://www.2600.com/
Perpetual learning is paramount for folks in any profession, but I’ve found that for individuals who work in cyber security it is absolutely critical. A significant part of the work I do involves knowing what risks lurk, both in the wild and internally, that can stand in the way of an organization’s future success. Staying current with these risks, mitigation techniques, and controls is vital.
There are all types of learning that help new concepts find a home in my brain. One comprehensive learning experience that I recommend for anyone in cyber security is an event put out each year by SANS, which is an organization that trains cyber security professionals. The event is called the SANS Holiday Hack Challenge.
This year my 9-year-old son helped me in ways that blew my mind. His little mind went after small details that I thought were insignificant that turned out to be a pretty big deal. He was very excited by what he was able to uncover…and so was I.
The SANS Holiday Hack Challenge introduces cyber security professionals and pen-testers to new technologies and opens their minds to risks and mitigation techniques that they had not previously considered. I greatly enjoy their ‘terminal challenges’ which provide hints toward solving objectives. Never before had I decrypted HTTP/2 traffic using Wireshark and SSL keys. So awesome! Here’s the link for this year’s challenge, which has been a wild ride for me, to say the least: https://www.holidayhackchallenge.com/2018/.
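For anyone wondering how that Wireshark trick works: decryption hinges on a key log file of per-session TLS secrets, the same format browsers write when the SSLKEYLOGFILE environment variable is set. As a rough sketch (the path and setup here are my own, not from the challenge), Python 3.8+ can produce such a file too:

```python
import os
import ssl
import tempfile

# Wireshark can decrypt TLS (and therefore HTTP/2) if it has the
# per-session secrets. Browsers dump them when the SSLKEYLOGFILE
# environment variable is set; Python exposes the same mechanism
# through SSLContext.keylog_filename (3.8+).
keylog_path = os.path.join(tempfile.gettempdir(), "sslkeys.log")

ctx = ssl.create_default_context()
ctx.keylog_filename = keylog_path  # secrets are appended on each handshake

# Any connection made through `ctx` now logs its session keys. In
# Wireshark: Preferences > Protocols > TLS > "(Pre)-Master-Secret log
# filename", point it at keylog_path, and captured frames decrypt.
```

The terminal challenge supplies the key log for you; in real life you need it from one of the endpoints, which is exactly why TLS is only as private as the machines that hold the keys.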
Stop in and poke around. Solve a terminal challenge or two then put it on your holiday to-do list for next year. You won’t regret it!
For Christmas we got our son an Arduino Uno starter kit. It’s not officially an Arduino, though. The hardware specifications are the same, but it is made by a company called Elegoo. What we purchased was the “Complete Starter Kit”. I highly recommend it. So far we’ve made prototypes for the following: 1) blinking LED lights, 2) a joystick controlling a servo motor, and 3) an ultrasonic sensor that tells us how far objects are from it. There have been a few other things, but those are what come to mind as I write.
Besides being extremely fun and interesting, these prototypes foster a new understanding about all the electronic things we use and how they may be wired. We could have gotten a kit for a robot or a remote-controlled car, but testing out a range of sensors seems to broaden our view of what’s possible. If we decide on a full project, we’ll have a much better idea of what we’ll need and whether it will work.
Also, as a side note, since I’m using my Chromebook for these projects I’m not using a locally installed IDE. Instead, I’m paying $1 a month to use the cloud service provided by Arduino for building sketches. So far it has worked flawlessly. Though ChromeOS does have a Linux sandbox now. I’m going to see if I can install it that way, too.
If you’re like many IT professionals who’ve had anything to do with large amounts of data, you’ve become immune to the phrase ‘big data’. Mostly because the meaning behind that phrase can vary so wildly.
Processing ‘big data’ can seem out of reach for many organizations, either because of the costs in infrastructure required to establish a foothold on this front or because of a lack of organizational expertise. And since the meaning of ‘big data’ can vary so much, you may find that you’re doing ‘big data’ work and then ask yourself, “Is this big data?” Or an observer can suggest that something is ‘big data’ when you know full well that it isn’t.
With my own background in data, I’m ever curious about what’s out there that can make the threshold into ‘big data’ seem less insurmountable. Also, I’m interested in the security considerations around these solutions.
In the last week or so, I’ve gotten more familiar with AWS S3 buckets and a querying service called Amazon Athena. Here’s the truly amazing thing. You can simply drop files in an S3 bucket and query them straight from Amazon Athena. (There are just a couple of steps to go through, but they are mostly trivial.) And for the most part, there’s not much of a limit on how much data you can query and analyze. You can scan 1 TB of data for $5. What? That’s right. And you didn’t have to set up servers, database platforms, or any of that. I’ll be exploring Amazon Athena more and more over the coming weeks. If you have an interest in this sort of thing, I suggest you do the same.
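To make that pricing concrete, here’s a back-of-the-envelope estimator in Python, using the $5-per-terabyte-scanned figure above. (Treat the constant as an assumption; cloud prices can and do change.)

```python
# Athena bills by data scanned: $5 per TB at the time of writing.
PRICE_PER_TB = 5.00

def athena_scan_cost(bytes_scanned):
    """Estimated query cost in dollars, given bytes scanned."""
    tb = bytes_scanned / 1024**4          # bytes -> terabytes
    return round(tb * PRICE_PER_TB, 6)

full_tb = athena_scan_cost(1024**4)       # scanning a full terabyte
ten_gb = athena_scan_cost(10 * 1024**3)   # a more typical 10 GB scan
```

A full-terabyte scan comes out to the advertised $5, and a 10 GB scan lands under a nickel, which is why columnar formats and partitioning (anything that shrinks the bytes scanned) pay off so directly with Athena.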
One note: Google has something similar called BigQuery, so that might be worth a look as well. I’ve explored BigQuery briefly, but I keep coming back to various AWS services since they seem to be holding strong as a dominant leader in emerging cloud technologies. But as we all know, the emerging technology landscape can change very quickly!
For some time, I’ve been interested in learning about the Raspberry Pi. It’s a little bare-bones computer that packs a big punch. And to top it off, it’s quite affordable. Through work I heard about a way to use a Raspberry Pi for an OS called RetroPie. RetroPie is an emulation platform that lets you play scores of old games…if you have the digital files for them, of which many can be found with the help of Google.
I’m not much into modern video games, (as in games from the last 20 years or so), but I did play NES games back when I was in jr. high and high school. And I do still have my original NES, but it has a number of issues that make it less than reliable for playing. My kids are interested in the older games because I’ll actually join them when they play. And, quite frankly, because the older games are super fun to play and easy to learn.
Anyway, RetroPie is a great way to learn how to use and get familiar with the Raspberry Pi. You simply burn the RetroPie image onto a micro SD card, pop it in the micro SD card slot, and boot it up! There are a few other things you need to know, but that’s the gist of it. Get a few games, a controller or two, have a monitor with an HDMI input handy, and you’re good to go. That’s a bit of an over-simplification, but please do explore RetroPie and Raspberry Pi if you’re at all interested in this sort of thing and are looking for a good way to get familiar with the Raspberry Pi world.
These days efforts to revamp company culture are in vogue. I’m going to attempt to articulate what I see as a connection between machine learning and efforts to change company culture. Stay with me here a bit because the analogy doesn’t show up until the fourth paragraph and I need to share a little bit of background first. 🙂
One group leading the charge to change company culture is Partners in Leadership (https://www.partnersinleadership.com). They use a tool that identifies the following flow toward changing results. It’s a pyramid that moves from experiences to results in the following steps: EXPERIENCES >> BELIEFS >> ACTIONS >> RESULTS. According to the model, you start with the results you want to see as an organization and then move backward until you’ve arrived at the experiences that you need to create. The thinking is that experiences shape beliefs, which shape actions, which shape results. They maintain that you cannot simply skip ahead to results until the rest of the house is in order first.
As for the experiences, they actually need to be high quality experiences. Partners in Leadership breaks these experiences into four types (big paraphrase here): 1) Easy to interpret, 2) Needing work to interpret, 3) Very little meaning, so there isn’t much to interpret, and 4) Experiences that, well, kind of did the opposite of what they were intended to do.
Now it is time for the machine learning analogy! Boiled down, machine learning is essentially learning from experiences (data) in order to shape beliefs (trained statistical models). These beliefs/models turn into actions (acting on the outcome of a model), which lead to results. Critical to this process is the experiential data and its interpretation (the model). We train our models by feeding data (experiences) into them. Why am I making this connection? Because organizations are really struggling to understand machine learning. Why not piggyback on something they’re learning already? Results from machine learning algorithms are no different from results gleaned from an organization’s cultural change initiatives. What data do you have that you can use to shape your statistical models? Which actions do you need to take to get results? You can change your culture and understand machine learning at the same time!
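To make the analogy concrete, here’s a toy Python version of the pipeline. The data points are made up, and the “belief” is just a least-squares line standing in for any trained model, but the four stages map one-to-one:

```python
# EXPERIENCES >> BELIEFS >> ACTIONS >> RESULTS, in miniature.
# Experiences are (x, y) data points; the belief is a line fit to
# them; the action is using that line to predict; the prediction
# is the result.

experiences = [(1, 2), (2, 4), (3, 6), (4, 8)]  # hypothetical data

# Belief: fit y = m*x + b by ordinary least squares.
n = len(experiences)
sx = sum(x for x, _ in experiences)
sy = sum(y for _, y in experiences)
sxx = sum(x * x for x, _ in experiences)
sxy = sum(x * y for x, y in experiences)
m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - m * sx) / n

# Action: apply the belief to a new experience.
prediction = m * 5 + b  # result: what the model expects at x = 5
```

Feed the model low-quality experiences and the belief, the action, and the result all degrade together, which is exactly the point the culture model makes about organizations.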
How much of the world’s IT infrastructure is in the cloud now, and how much of it will be in the cloud in five years? I’m sure there is nearly solid data somewhere to answer those questions. Regardless, it is happening, and it won’t be long until most IT infrastructure is in the cloud.
Oddly, though, in my conversations with other IT professionals, it seems like we’re finding we’ve arrived late to the party. With the advent of “the cloud” organizations are finding that there are all sorts of solutions out there that don’t necessarily need the involvement of traditional IT. In much of the IT world, our perception is that this process is more gradual when in fact it is accelerating.
So the real question is not whether “the cloud” is coming, but whether we see it coming. If we want to make sure cloud implementation is done properly and doesn’t completely hose our respective organizations, we must learn as much as we can in a very short period of time.
Nearly every day I find myself reading about cloud security risks right alongside incredible cloud solutions for problems that would normally be much harder to solve. At the same time, many cloud solutions create problems that we’ve never seen before. With the flip of a switch something private can become public: see S3 buckets. And it isn’t so much that the cloud is insecure, but how we connect to the cloud, whether through our API infrastructure or open ports that maybe shouldn’t be…open. The only answer I have for all of this is that we need to learn, learn, learn, learn…and fast.
I’ve been a Linux user at home for quite some time. We were a Windows family very early on but ran into issues with viruses. I resurrected a super old laptop, put Lubuntu on it, and gave it to my wife. It worked well for years. After a while, one thing or another wouldn’t work, so on a whim I got her a Chromebook. Nearly everything she does is online, and she’d already started using Google Docs when on the Lubuntu PC. As a result, the transition was peachy! After watching her tote that thing around the house for a year or so, and noticing how little she worried about charging the battery or booting it up, I decided I needed one too!
It’s done quite well for me. Occasionally, I have to jump over to my Ubuntu desktop for more high-powered activity, but 80% of my computing at home is on the Chromebook. This experience, and the evolution of computing as it moves into the cloud, is leading me to believe that the days of everyone running around with what is essentially their own personal server are numbered. I’m guessing in about five to eight years, computing will be even more cloud focused than it is now, and people won’t really own traditional laptops any more.