I'm Not A Hacker
I’m not a hacker.
I would never identify as one. I have neither the skills nor the experience to call myself a hacker. Phishing? I think it’s a typo. Man in the middle? I thought that was a TV show. SQL Injection? Didn’t know SQL was curable…
I do get bored though. And when I get bored, I find myself figuring out ‘ways’ to bypass ‘stuff’.
Be forewarned that this isn’t a technical post, nor is it an actionable one. This is just me ranting at the sad state of hardware, firmware, and software security, and at the fact that the people responsible for maintaining these systems are either incapable of securing them or just don’t care. Both are extremely bad.
Terminology: When I refer to ‘hackers’, I’m not talking about people who fiddle around with software or electronics and think they’re something special. I mean a Mr. Robot kind of hacker.
This post comes after I deleted a 3,000-word article that went into depth about how I got bored one day while stuck somewhere and decided to… ahem… ‘bypass’ a company’s wifi paywall. This was the first time I’d done something like this, and I wanted to write about the experience because I thought it was interesting to walk through the steps it takes a novice (me) to figure out how to do something like this.
All in all, including the time it took to write the post, I had spent about 1.5-2 hours on this. If I had to re-do it right now, it would probably take 15 minutes.
From a computer security point-of-view, this worries the hell out of me, because it means a real hacker would take something along the lines of 5 minutes to do it manually the first time, and would probably just write a script to do it automatically in seconds.
I had no malicious intent, I was just so bored… What if that wasn’t the case? What could someone do if they weren’t just bored?
I decided not to post what I wanted to (even though all the information is already freely available on Reddit and the like) until I get a chance to talk to the company about the security gap.
Companies Don’t Care
I’m emailing that company about this issue, but the truth of the matter is that I don’t expect to hear back from them. That’s the unfortunate trend I’ve seen in most companies I’ve interacted with: I email them a problem along with a solution, I don’t hear back, and a year later nothing has been done.
There’s a question that comes out of this: Is posting security flaws online unethical? And what’s the right amount of time to give a company to patch something like this before posting it (with or without their name redacted)?
A lot of the security flaws I find come about because I’m building or coding something, spot a possible security flaw, patch it, and then test whether other similar products/platforms have it too. So the issues I find are super specific and usually apply to certain companies. Nothing like Heartbleed or other bugs that would cripple the fabric of the internet…
For example, 6 months ago, I discovered a potential way to bypass authentication for certain edge cases on a very popular app/web product for a company with > 1 million active users. I emailed them about it, and - surprise surprise - never heard back.
This ‘bug’ is an implementation-specific edge case for social logins that lots of websites use (I tested on 2-3 other major websites). Is it ethical to post this very common implementation flaw? It’s a very narrow set of use cases where authentication could be bypassed, but I’ve already proven it happens on some very popular platforms…
It’s Not Just The Internet
I posted those because anyone who makes a BLE product SHOULD already know to use basic encryption mechanisms like that; and I also didn’t post exactly how to bypass the pseudo-security features (although, again, anyone who has worked with BLE before would be able to figure it out in hours).
Generally, security flaws like that come out because startups who have no idea what they’re doing want to push a product to the market as fast as possible - without consideration of the implications.
It’s Not Just The Internet of Things Either
In another instance of boredom, in our university dorms we rigged the front-door buzzer to automatically unlock the door whenever our room was buzzed. It was just a matter of getting a relay hooked up in the right location, and that was that.
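The relay hack above boils down to very little logic. Here’s a minimal sketch of what a microcontroller sitting between the buzzer line and a door-release relay might run; the names, room number, and debounce timing are my own assumptions, not the actual wiring we used:

```c
#include <stdbool.h>

/* Sketch only: sample the intercom buzzer line once per millisecond
 * and energize the door-release relay after the line has been
 * continuously active for DEBOUNCE_MS, so electrical noise on the
 * intercom wiring can't pop the door open by accident. */
#define DEBOUNCE_MS 200

typedef struct {
    int active_ms; /* consecutive milliseconds the buzzer line has been high */
} buzzer_state;

/* Called once per 1 ms tick with the sampled line level.
 * Returns true when the relay should be closed (door unlocked). */
static bool relay_tick(buzzer_state *s, bool line_high) {
    s->active_ms = line_high ? s->active_ms + 1 : 0;
    return s->active_ms >= DEBOUNCE_MS;
}
```

In practice our version didn’t even need a microcontroller; a relay across the door-release contacts, triggered by the buzzer signal, was enough. The debounce above is the kind of tiny safeguard the installer could have added too.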
Was that bad? Clever? Dangerous? All of the above?
How did the company that installed the door-locking mechanism not imagine people would do that? Or did they not care? Or did they decide it was too much work to stop something like that?
I could add about 10 cents of components to a PCB and patch that problem.
Pseudo Security in Apps
I downloaded an app that claimed to turn my tablet into a kiosk and prevent users from accessing anything except what the owner wanted to expose. In general, this was to set up a ghetto kiosk (instead of doing something like what I wrote about here). It has hundreds of thousands of downloads and high ratings. It took me 4 minutes to break out of the app, without even using a rooted or debuggable tablet.
Didn’t hear a word from the developers about it (solutions to the problem included).
Recently, I downloaded a very popular app and reverse engineered it to figure out a password it used to unlock certain functionality. It took an hour. Still no reply.
The State of Security Sucks
The biggest takeaway I sincerely want to convey is that I don’t think I’m good at this; it’s just that the state of software and hardware security is THAT BAD!
Genuinely, I am NOT a hacker, I am NOT good at this. I get bored and fiddle around, and because companies/developers don’t pay enough attention to security, it’s easy for anyone to do this. Especially if you have Google by your side.
Maybe, in the 10 or so times I’ve reported security flaws, I just happened to run into companies that were particularly bad, and that’s not the state of the average company. I doubt it.
I think incredibly highly of companies that have the balls to hire security consultants and offer bug bounties, which is their way of saying “We know we’re not perfect, help us become better and more secure”. I think the exact opposite of companies that stick to their “security through hope” methodology.