The New Oil

Practical privacy and simple cybersecurity.
TheNewOil.org

For some people, like myself, jumping into something new is exhilarating and you sink yourself into it 100%. This is where I found myself a few years ago when I first got into privacy, and where you might find yourself. Eventually I dialed back a bit and relaxed as I got more comfortable with this stuff, figured out what did and didn’t work, and made convenience adjustments as my threat model allowed. Regardless of where you end up settling on the privacy spectrum, it can sometimes be difficult interacting with people who aren’t privacy-minded. It can be hard to explain why you don’t have a Facebook, why you don’t want them posting your picture online, or why you’re asking loved ones to use an encrypted messenger. So this week I wanted to talk about how to interact with non-privacy-minded people. Specifically, I want to talk about how to decide where to draw the line on who does or doesn’t need to be using privacy techniques.

Preaching Privacy

Let’s go ahead and start with the hard truth. You can’t evangelize privacy to people like a pastor on the street. Most people just don’t care, and beating them over the head with it repeatedly isn’t going to give them Stockholm Syndrome for your message. Furthermore, people aren’t logical. I’ve seen ridiculous suggestions like hacking your friends, browsing their phones without asking, or recording them. Nobody is going to go “wow, you’re right, I’m being a hypocrite and I do value my privacy.” They’re going to call you an asshole and stop talking to you. I’ve personally found that the best strategy is just to live your life, make your opinions known respectfully, and let people come to you. A few months ago I wrote a blog post about Ron and his dating conundrum. Ron wasn’t actually my friend, he was a friend of my partner. He had a problem, and my partner knew that I was the most qualified person she knew to solve it. When your friends have problems, they’ll know to come to you and ask. That’s when you can offer solutions. And it doesn’t hurt to ask your friends “hey, are you familiar with password managers?” and offer some advice, but don’t repeatedly bash them with it. They’ll move at their own pace, and quite frankly their security isn’t your problem.

Levels of Closeness

It’s important to remember that not everyone in your life has the same level of closeness with you. Your significant other is closer to you than your coworkers. Your family is closer than your friends (for most people). And your friends are closer than your barber. This should be an important factor when you decide how to deal with people who aren’t privacy-minded. Do you need your significant other using an encrypted messenger as you text throughout the day? Yes. Especially if they like to send you risque stuff and you use company WiFi. Do you need your favorite barber to use encrypted messaging? Probably not. They probably don’t even need your phone number. It’s important to pick your battles.

Context of Power

Do your coworkers need to use encrypted messengers? This becomes a gray area. I mentioned once that when the pandemic started in the US, I asked my boss if we could not use Zoom but I also realized that we have to do what’s best for the company. My coworkers – and my boss – are used to me being tin-foil hat crazy. They don’t mind me suggesting things like Privacy.com, Bitwarden, or Signal. But I also realize that I have no power there. I’m not the IT guy. I’m not the VP or COO. I’m at the bottom of the ladder, and I keep that in check whenever I suggest anything. My coworkers and I chat fairly frequently outside of work – we send each other memes or articles we found interesting and stuff like that – so I don’t think there would be any issue if I said “hey, could we move this conversation to Signal” or “Can we set up PGP keys for stuff like this that isn’t company-related?” I don’t even think anyone would really complain if I suggested setting up PGP keys for inter-office email and opened that option to the outside world (though, for the record, I highly doubt anyone would be on board). But the point is, I realize that when it comes to company policy I have no power, and while I am free to voice my opinion I have to realize that it is not my way or the highway.

Additional Context

I think those two things are the biggest deciding factors when deciding where to draw your privacy line, but there is additional context. When dealing with medical or financial professionals, I don’t see anything wrong with seeking out one who is willing to use encrypted email. I also think age and tech-savviness play a factor. I mentioned in a prior blog that I was able to switch my mother to ProtonMail by offering to set it up for her and let her take over, and she has been using it ever since. My grandmother, on the other hand, is in her 90s. I love her and I mean no disrespect, but she has one foot in the grave. We also speak about twice a year. I see absolutely no value in fighting with her about using ProtonMail, Signal, or anything else. Think about that: I just said you should get your doctor – who you probably see once or twice a year if you’re healthy – to use encryption, but not your grandma. Obviously this varies from person to person. For some people, their grandparents raised them as if they were the actual parents, and those same grandparents are fairly tech-competent and can reasonably be trained to use encryption. The point is to measure things with context. It’s impossible to draw a universal line in the sand and say “family MUST use encryption while strangers you only talk to once a month don’t have to.” What you’re communicating, how frequently, and to whom all matter.

I often see people ask “how do I get my family/friends/significant other/coworkers/etc to care about privacy,” but I rarely see anyone ask “should you get them to care at all?” It’s an important question. Before you ask how to convince them, you should start by asking if you even need to. Now obviously, I would prefer a world where everyone defaults to encryption whenever possible, but that’s not the world we live in right now and I have to pick my battles. It’s just like threat modeling: obviously it’d be nice if we could protect against all threats, but first you have to ask which threats are actually pressing and need to be addressed first, and which ones can wait (if they need to be dealt with at all).

I’m sorry this blog was a little scattered. I try to keep my blogs somewhere between 1,000 and 1,500 words, and this topic is huge and complex. As I said, I can’t simply say “here’s when you should and shouldn’t demand privacy from others.” It’s almost all one big gray area that varies from person to person. But I hope I’ve at least given you some thoughts and tools to figure out where the gray area ends and the black and white lie for you.

Tech changes fast, so be sure to check TheNewOil.org for the latest recommendations on tools, services, settings, and more. You can find our other content across the web here or support our work in a variety of ways here. You can also leave a comment on this post here: Discuss...

Lately students have been returning to school, but as I’m sure I don’t need to tell my readers, things are a little different this year. Many schools are looking to online or hybrid classes as a way to protect students and staff from the still-ongoing pandemic. Unfortunately, schools are often underfunded. Enter Google, which has stepped in and offered Chromebooks to schools at low prices to offset this problem. Personally, I don’t blame the schools. Teaching is a difficult thing, and the US federal government certainly isn’t making that problem any easier. Schools are doing their best. But I am pretty upset at Google. We all know that Google is one of the largest and most aggressive privacy offenders, which means there is no doubt in my mind that Google has an ulterior motive with this generosity: they want to get kids rooted in the Google ecosystem early so they stay there. Income stream for life. Sadly, this isn’t much of a conspiracy theory; it’s basically a given in the tech community (source, source, source, source). As students have begun to return to school, I’ve seen a lot of questions – and even had a few directed at me – regarding the privacy implications of these devices, including what’s possible and how to use them as privately as possible. So this week, I’m going to discuss that.

What Can It See?

The most common question I get/see regarding Chromebooks and privacy is what else they can see on the network. If I get issued a Chromebook and use it at home, can the school/Google see other devices and network traffic? The short answer is no. Technically it is possible, but once again schools are highly underfunded and they really have no motivation and nothing to gain from such intrusive programs. I have no doubt that the school can see almost everything you do on the device itself, but that’s probably where the school’s eyes end.

Google, on the other hand, is a bit more invasive, but not as invasive as some might think. I have no sources to back me up here, but based on what I know about how surveillance capitalism currently works, Google can see everything the school can, as well as network information. For example, Google can probably see your SSID and information about your network (such as the password encryption protocol, router info, IP address, and more), and I wouldn’t be surprised if Google can also see what other devices are on that network, such as a Roku TV, a Windows 10 machine, an iPhone, etc. However, as for the actual traffic, I would be surprised if Google sees the traffic from those other devices. The technical ability exists, but I suspect Google’s tentacles on every type of device are already so deep that the company gains nothing from that kind of spying. It’s easier to just have each device report individually and connect the dots on Google’s end. After all, if you have two devices reporting from the same IP, then obviously they’re on the same network, and you can be much more invasive tracking each device locally than spying from the router.

Best Practices

In a moment, I’m going to list a bunch of settings I recommend changing, but first let’s talk about how to use your Chromebook in the most privacy-respecting and secure way possible. It should go without saying that you should consider everything you do on the device compromised. Google’s Chrome OS is proprietary, so we don’t fully know what goes on behind the scenes. You should assume anything you do on the device can be seen by Google, just to be safe. Of course, I want us all to have a sanity check: I highly doubt Google is waiting for you to log into your bank on their device so they can screenshot your balance or steal your account numbers. Don’t get overly paranoid about using the device and run yourself ragged. But at the same time, be aware that you’re giving up some privacy by using it. If you are truly concerned about the traffic issue I talked about above, then you can put the device on a separate subnet or VLAN, but again I personally don’t think that’s much of an issue.

I also encourage you to use a dedicated account on the machine. If the device was issued by a school and you have an account with the school, I think it’s safe to use that account. The school already knows the device was issued to you, and as mentioned before, I don’t think they have any interest in making sure the IP address you used matches the records on your paperwork (though I would use a VPN in case of a data breach). If the school did not issue you a Google account, I would make a new one.

I want it to be noted that Google has some of the best security out there. The privacy is virtually nonexistent, but the security is top notch. However, we should never get complacent. It should go without saying that all of my usual advice applies here. Strong passwords, two-factor authentication, VPNs, all are still useful here.

There are additional challenges and considerations for people attempting to lead a “Google-free” lifestyle. At that point, it’s really an individual question. I’ve heard people consider only using the device on public networks (such as libraries and coffee shops) or using a phone hotspot. I don’t think those are bad ideas, but they can still create a pattern that Google can make use of. Of course, a pattern of using the public library every day at 2 pm is, in my opinion, far less revealing than your home IP address and what other devices are on your network. You’ll have to decide the lesser evil for yourself.

Settings

Google Chrome OS: Version 76.0.3809.136

Bluetooth: Off

Connected Devices: None

People: Don't sign in if possible, use a unique or school account if you must

Screen lock: Show lock screen when waking from sleep

Screen lock: Screen lock options: either

Autofill: All off

Device: Storage Management: Browsing Data: Advanced: Clear All

Search and Assistant: Search Engine: DuckDuckGo, Searx, or MetaGer

Search and Assistant: Google Assistant: Disabled

Privacy and Security: Disable all settings

Privacy and Security: Manage Security Key: Create PIN

Privacy and Security: Site Settings: Cookies: Keep local data only until you quit your browser: enabled

Privacy and Security: Site Settings: Cookies: Block third-party cookies: enabled

Privacy and Security: Site Settings: Location: Off

Privacy and Security: Site Settings: Camera: Ask before accessing

Privacy and Security: Site Settings: Microphone: Ask before accessing

Privacy and Security: Site Settings: Motion sensors: Off

Privacy and Security: Site Settings: Notifications: Off

Privacy and Security: Site Settings: Flash: Off

Privacy and Security: Site Settings: Pop-ups and redirects: Off

Privacy and Security: Site Settings: Ads: Off

Privacy and Security: Site Settings: Unsandboxed plugin access: Off

Privacy and Security: Site Settings: Handlers: Off

Privacy and Security: Site Settings: MIDI devices: Off

Privacy and Security: Site Settings: Payment handlers: Off

Language and input: Spell check: Off

Downloads: Ask where to save each file before downloading

Downloads: Disconnect Google Drive account: enable

When returning it, Powerwash it under the “About Chrome OS” page.


This week, nothing particularly explosive happened in the privacy or cybersecurity world. Governments and major service providers continued to be hit with malware, data brokers continued to scoop up people’s personal information without so much as a blink from the law, and people continued to feel as if they have no choice but to submit to the abuses of surveillance-driven social media like Facebook and Twitter (news flash: humanity existed just fine before Facebook, you can go back to not having it. But that’s a rant for another day).

So I’ve decided this week is a great opportunity to take advantage of the calm and remind ourselves of some basic habits. In the military, troops are continually trained on basic stuff like how to handle and shoot a weapon, how to build and guard a temporary encampment, how to conduct patrols, and more. This is because any skill – when left unpracticed – gets forgotten and rusty. It’s never enough to say “oh, I learned this basic, day-one, 101-type skill, I’m good.” You have to keep coming back to it and keep it sharp, make it habit. So this week, let’s go back to some of the basic stuff and make sure we’ve got our fundamentals tight.

Security

As some of my readers probably noticed, I tend to take a more security-focused approach on my website. I view privacy as an important part of your security model, as well as a fundamental human right, but while some resources say “it’s most important that you use encrypted messaging to prevent your cell carrier from reading your messages,” I say “it’s most important to prevent identity or account theft.” So with that focus in mind, I’ll start our refresher with best security practices.

First off, any American reading this should freeze their credit. In my time promoting this to people (especially parents with children), I’ve learned that “credit freeze” is actually a misnomer. Many people assume, based on the name, that freezing your credit means that nobody, not even you, can access your credit. That would be disastrous for people who are trying to get out of debt, build their wealth, boost their credit scores, or are otherwise still in the process of actually using their credit. However, that’s not the case. Rather, a credit freeze is like adding two-factor authentication to your credit file. Nobody can open any new accounts without the PIN the bureau issues you when you freeze your credit, but changes can still be made, such as updates to accounts, debts paid off, or changed addresses or scores. (Friendly reminder from personal experience: don’t lose the PIN they send you. It can be replaced, but it’s a nightmare process.)

On the topic of two-factor authentication, literally every online account you use that offers two-factor authentication should be using it. Fortunately, in recent years, 2FA has become more widely accepted and many places offer some form of it (even if it’s only a weak, privacy-violating form such as email or SMS). Honestly, if you use two-factor correctly, you can get away with having a weak password. I don’t think you should, but you can. That’s how important it is.
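If you’ve ever wondered what your authenticator app is actually doing when it spits out those six digits, here’s a minimal sketch of the time-based one-time password algorithm (TOTP, RFC 6238) that most such apps implement, using only Python’s standard library. The base32 secret shown is a made-up example for illustration; in real life, it’s the setup key hidden in the QR code a site shows you when you enable 2FA.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute a time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32.upper())
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t) // step)        # 30-second time window
    digest = hmac.new(key, counter, "sha1").digest()   # HMAC over the counter
    offset = digest[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example with a throwaway base32 secret (yours comes from the 2FA setup key)
print(totp("JBSWY3DPEHPK3PXP"))
```

Your device and the server each derive the same code from the shared secret and the current time, which is why app-based codes work with no network connection at all – and why they’re stronger than SMS codes, which have to travel over the phone network to reach you.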

Privacy

For privacy, I would argue that the most basic, important thing you can do is to look at the settings on your phone and pay attention to them. While phones are virtually impossible to make private by nature of what they do and how they work, you can dramatically reduce the amount of data that it leaks and that the apps themselves collect. You can change a variety of settings to restrict apps to only having access to the things they actually need and to collect less data by default. Additionally, you should remove any apps you don’t actually need or use regularly. Apps are the biggest attack vector for malware and other security and privacy breaches on mobile devices. My general rule is if you can wait and do it on a desktop where you have better security and more control, you should. On that note, be sure to examine the settings on your desktop machine as well.

Habits

It’s important to know that privacy and security aren’t just a bunch of apps or products you buy, they’re also habits you develop. In the classic TV show “Seinfeld,” there’s an episode where the titular character’s apartment gets robbed while he’s away because his friend Kramer had failed to close the door. When Kramer asks Seinfeld if he has insurance to cover the losses, Seinfeld’s incredulous retort to Kramer has stuck with me since childhood: “I spent my money on the Clapgo D. 29, it's the most impenetrable lock on the market today...it has only one design flaw: the door...[shuts the door] must be CLOSED.”

You can invest in all the best tools, hardware, and services, but if you don’t use them correctly it’s all for naught. In the studio audio world, there’s a saying that a good recording is the result of a hundred tiny good decisions. Good privacy and security are likewise the continual result of a bunch of tiny decisions. Just as with dieting, it’s not about running ten miles every day and eating salads. It’s about switching to diet sodas instead of regular, or passing on the fries with the burger. With privacy and security fundamentals, it’s important to build habits. Fortunately, all of the stuff I listed here is pretty passive – you uninstall an app and you never think about it, or you enable 2FA and it just works. But there are other effective basics, like considering metadata or practicing good internet hygiene. There is general agreement in the cybersecurity community that the NSA – elite, well-funded, and advanced as they are – probably uses common tactics like credential stuffing or phishing more often than not to access a target’s accounts. It makes sense. Even as we near 2021, people are still falling for this stuff. So while you’re taking the time to examine your basic steps, don’t forget to check your habits and make sure you’re not undoing all your hard work with bad ones.


I have one of those day jobs where the rules regarding phone use are pretty relaxed. As long as the work gets done and you’re not overdoing it, nobody really minds if you send a text or step out to take a call for a moment. As such, at least a few times a week I’m treated to the following from one or both of my coworkers: “Hold on a second,” (answers phone), “hello?” (hangs up a moment later) “Robocall.” I, on the other hand, rarely get unwanted texts or calls. It seems everyone I know is really struggling with unwanted, annoying phone calls and texts, especially as the election looms closer here in the US (let me tell you how many times your unwarranted, North Korea-esque political texts have changed my mind about who to vote for). So in this blog post, I want to share what I’ve personally done that I think has brought my robocall/robotext amount down to near zero. Keep in mind, I said “near zero.” I still get one here and there, but it’s almost nonexistent compared to others I know.

What Not To Do

Let’s start with what not to do: please, please, please for the love of god do not download one of those stupid “caller verifier” apps. You know the ones I’m talking about: RoboKiller, Truecaller, Call Filter, etc. Do these apps work? I suppose. But they work using the network effect. In other words, these apps read your contacts list and add it to their database. They’re aiming to make a database of every phone number out there and who it belongs to so they know which ones are legit and which ones aren’t. I’ve said on my website that your carrier-issued cell number is essentially as good as a social security number these days and I stand by that statement. As such, some of the more privacy-minded people in your contacts list may not appreciate having their phone number shared to an unknown (and probably unsecured) database without their consent. I could make a lot of arguments here about metadata and relationship mapping, but it basically goes back to “don’t be a dick.” Assume that your contacts consider this information personal and don’t share it without their consent.

What To Do

If you’re like I was, you’ve probably been using – and handing out – the same phone number for years, maybe even decades. In this case, your number is already out there, and while good defenses are still worthwhile (just because you didn’t notice the leak before doesn’t mean you don’t patch it when you find it), they probably won’t do much for you on their own. My SIM phone number that I’ve had since 2009 still gets a couple robocalls a week and a few texts per month. So you’ll need to get a new phone number if you want to truly get rid of robocalls and texts to the extent that I have. I strongly encourage the use of Voice-over-IP numbers, such as MySudo or Hushed, and I explain more about that reasoning on this page. The important thing, whether you go with VoIP or ask your carrier for a new number, is that you follow good habits regarding your new number. I’ll talk about that in a second.

One reason I like the VoIP method, and the reason it’s been so effective for me, is because Apple offers a way to shut off incoming calls to your SIM number. Go to Settings > Phone > “Notifications: Off” & “Silence Unknown Callers: On.” For Android I haven’t yet found a perfect solution. The most effective method that will probably work for most readers is to turn on “Do Not Disturb.”

Next, depending on how serious your problem is, there are a few higher-level solutions you can try. These might be a little privacy invasive to some people so they may not be ideal for the hardcore privacy enthusiast, but for the average person they’re probably acceptable solutions. The first one is to register for the National Do Not Call Registry if you live in the US. This is actually much more effective than you would think. It won’t stop scammers who are outside US legal jurisdiction, but it does stop many of the more legitimate (but still unwanted) callers. This list does need your email address, so as always I encourage the use of masked email addresses if you go this route. Second, many cell carriers are now offering programs where they help block known or suspected spam numbers. You can call your carrier and ask if they can activate this feature for you.

Privacy Isn’t Products and Services, It’s a Lifestyle

Between all these steps, you should have seriously reduced the number of robocalls coming in and bugging you throughout the day. But unless you make some behavioral changes, it won’t take long for them to come back. You can keep playing whack-a-mole with new VoIP numbers, or you can retrain yourself and never have to think about it again. As I said before, most of us are conditioned to hand out our phone numbers like candy, but this is dangerous and it quickly puts your phone number back in robocall and scam databases. I also mentioned that your phone number is basically a social security number these days, and you should start thinking of it and treating it as such.

In its purest form, protecting your phone number is simply a matter of asking “does this person really need my phone number?” A lot of places ask for your phone number, and almost none of them actually need it. For example, if you go out to eat at a restaurant and there’s a wait, they ask for your phone number so they can text you when your table is ready. You can opt out of this and ask them to just call your name instead. When you place an order online for food, they ask for your phone number in case there are any questions or issues with the order. They never call or have questions. Give them a fake number. Your area code plus 867-5309 is always a safe bet (it’s an ’80s pop song). If the phone number is optional, don’t give them anything. Even when ordering a package online, unless it’s a vitally important package that you can’t afford to lose (such as medicine), you can safely put a fake number in the order form. There are, of course, areas where you should use a legitimate number: things like banking, healthcare providers, and work. By being judicious about who you give your phone number to and having a backup fake number ready to go, you’ll do an excellent job of keeping your new number out of the databases where it would be abused and used to annoy you.


If you’re remotely plugged into any kind of culture at all, you’ve probably heard about the new documentary The Social Dilemma. At the time of this writing, the show has broken into the Top 10 trending in the US (I know it hit at least Number 4 but was unable to confirm its peak position) and holds a 90% on Rotten Tomatoes, receiving rave reviews from many critics. There’s already a variety of reviews online from top-notch outlets like The New York Times, The Wall Street Journal, and even RogerEbert.com. Even so, I felt that I could offer a unique opinion on it as someone who both lives and breathes privacy but also strives to make those topics accessible to “the average person.”

About the Director & the Film

Jeff Orlowski is an experienced documentary filmmaker. Some of his more well-known works include Chasing Ice and Chasing Coral, both films about the impact of climate change on the natural world. He has done work for big companies like Apple, National Geographic, and Stanford, and founded his own production company aimed at producing “socially relevant” films.

The Social Dilemma premiered on Netflix on September 9, 2020. The documentary features interviews with some of the most influential names in Silicon Valley, like the creator of the Facebook “Like” button, the founder of Pinterest, and the former “Design Ethicist” at Google. These are some of the very people who worked to make social media as addictive as it currently is. The documentary mainly focuses on Facebook and Instagram, though it does briefly mention other social media platforms, and discusses the addictive nature of social media, how it got that way, how it currently works, and the real-world impact of that addictive, algorithmic design, ranging from rising depression rates in teens to social and political division and violence.

The Good

Before I saw the film, the thing that most piqued my interest was the people interviewed. While the film does bring in a few privacy proponents such as Shoshana Zuboff and Jaron Lanier, it primarily focuses on the former Silicon Valley executives. I personally think it carries a lot of weight when the very creator of something publicly says “this is not what I intended and it needs to change.” That’s very different from a completely removed person saying the same thing.

I also really like that the documentary doesn’t focus on privacy or security at all. I find frequently in my discussions with non-privacy people that such subjects aren’t very interesting to them. They feel intangible, nebulous, and unconnected. The average person doesn’t feel like they are at risk of being doxxed, stalked, or targeted. But things like political division, depression, and screen addiction: these are things that many people struggle with, and in the off chance that you don’t struggle with one of these issues personally you probably know someone close to you who does. These issues hit home for almost everyone, and I think this was a fantastic approach for the documentary to take.

The Bad

Let’s start off with something everyone can agree with: the re-enactments were a bad idea. I suspect the goal of the re-enactments was to create context for the interviews, give concrete examples and visualizations of how this stuff works, and to create something that the viewers could relate to rather than a bunch of white men talking about how this wasn’t what they meant to create. Instead, I found them very “after-school PSA” in their feel, their oversimplification, and their hyperbole. I’m not sure if the issue was the writing or the re-enactments themselves, but they didn’t really help the movie.

Despite the effort to create watchable content, two of the three people I personally know who watched the movie didn’t make it through. I want to caution against relying on anecdotal evidence – the movie hit #4, so clearly many people did finish it – but I think that says something. Of those three people, the one who did finish watching it was thoroughly freaked out by it and is now very concerned about her privacy and use of social media. Of the two who didn’t finish, one said it was boring and the other said it felt like the film was repeating itself. Both made it about halfway through. While I realize you can’t please everyone – and I personally disagree with both of the negative reviews – if you’re losing your very target audience, it’s worth examining why. I constantly seek feedback on my site because I want to know where I’m failing to communicate what I feel are important issues, so I can reach and convince as many people as possible.

Final Verdict

I personally greatly enjoyed the documentary and I recommend it. For people within the privacy community, there isn’t much new to learn here. For people who aren’t, some of it will be obvious – the kind of stuff we’ve all suspected but never had confirmed – but some or much of the information will be eye-opening and brand new. A lot of what is said in the movie would sound like tin-foil-hat conspiracy theory coming from someone like me, but it’s not coming from me; it’s coming from the people who built the system. Are they also being paranoid? That gives the claims a new level of weight and authority, and I think that alone makes it worth watching.

More on the Movie

You can visit The Social Dilemma’s official website here. It is currently viewable on Netflix.

Tech changes fast, so be sure to check TheNewOil.org for the latest recommendations on tools, services, settings, and more. You can find our other content across the web here or support our work in a variety of ways here. You can also leave a comment on this post here: Discuss...

I’ve been pumping out a lot of book reviews lately. I guess I’ve finally had more time to read.

About the Author & the Book

Bruce Schneier is an internationally renowned cybersecurity expert. He has written over a dozen books on the topic, as well as “hundreds of articles, essays, and academic papers.” Schneier has testified before Congress, served on several government committees, made numerous appearances on TV and radio, and sits on the boards of various non-profits and educational societies. He has also been heavily involved in the creation of several cryptographic algorithms, most notably Blowfish and Twofish.

In his latest book, Schneier explores modern cybersecurity (or the lack thereof). He explains why Internet of Things (IoT, or “smart device”) security is a serious matter, examines the reasons that led us to where we are today (aka why modern cybersecurity blows), and offers some ideas for moving forward and changing course.

The Good

This book is incredibly accessible. Without skimping on accuracy or details, Schneier shies away from in-depth technical analysis, instead offering a bird’s-eye view of the current cybersecurity landscape. His goal is not to explain how asymmetric keys work, but rather why we don’t use them to secure our fridges and toasters. This makes the book a great read even for those with the most limited technical knowledge. If you’re smart enough to understand “try turning it off and on again” – even if you don’t know why that works – you’re smart enough for this book.

I’m also a fan of people who offer solutions. I don’t believe that offering solutions is mandatory – you don’t have to know how to fix a toilet to know that it’s not working right – but I personally find it refreshing, constructive, and thought-provoking to say “the toilet’s broken, here are a few things that might fix it.” I also appreciate Schneier’s occasional reminders that he’s not trying to claim he has the answers. While his book is chock-full of ideas and suggestions, he regularly reminds readers that his ideas may or may not work, and probably aren’t the only solutions. He says at the beginning, and periodically reiterates, that his goal is to start a discussion, because we as a digitally-connected world desperately need to have one before our toasters kill everyone.

I think perhaps the best praise I can give this book is that it almost never discusses privacy. Some of my more privacy-centric readers know that getting people to care about privacy is a lot like getting a pig to care about the nutritional content of the slop you’re feeding it: people just don’t care. But cybersecurity, that’s something people care about. People are deeply concerned about identity theft, stolen bank numbers, and stalkers. This book is almost completely about that stuff (at least, on a high level), and as such it should be of interest to nearly anyone reading this.

The Bad

For one, I think Schneier relies a bit too much on government and regulation in his proposed solutions. Let me be clear: Schneier changed my views. Without being too political, I consider myself Libertarian. I consider small government with wide margins of individual freedom to be the best route, at least here in the US. But Schneier presents evidence in his book that I’m wrong, and while it’s hard to admit when you’re wrong, I’m not too proud to do it. Schneier argues that government regulation of business, industry, and consumer protection has produced a lot of good that corporations, left to their own devices, would have been too selfish and greedy to implement. In plain English: sometimes you need the government to force companies to do the right thing. Schneier has examples of this, and he proved me wrong. I accept that.

The reason I brought that up is this: while Schneier obviously has evidence to back up his claims, and he did win me over to his line of thinking, I also think that the law is not bulletproof. Lawbreakers, by definition, do not obey the law – whether that means breaking into a house and stealing the valuables, or storing customer data improperly and abusing it. While I think regulations and fines would go a long way towards fixing the current state of things, I’m a little disappointed that Schneier’s proposals almost universally amount to “we need a government regulation.” I think that people should also take personal responsibility for their data whenever possible – protecting themselves with things like end-to-end encryption, metadata obfuscation, and the other plugins and tools I discuss on my website – rather than relying only on forcing companies into compliance.

To be fair, Schneier is on board with these things. He does explicitly talk about E2EE, and he admits more than a few times that there will always be companies that break the regulations, but I still personally would’ve liked to see at least a chapter, or even a section, about taking matters into your own hands.

Final Verdict

I wholeheartedly recommend this book. Schneier has an exhaustive list of sources in the back, but he writes in a very easy-to-grasp way. This is not a research paper for the hardcore privacy nerd; this is an introduction for everyone. Schneier says over and over that his goal is to start a discussion. He repeatedly states that his ideas may not be the best ones; he simply wants to get us all talking about ideas. This is a discussion we desperately need to have. As I sit here writing this, I have a smartphone next to me. I have a smart TV in the living room, and two PlayStation systems (a 3 and a 4) behind me that are both network-connected. My girlfriend’s computer across the room is network-connected, as is her phone. And we’re on the low end of the technologically connected: many others I know also have home assistants, smart thermostats, and Ring doorbells. This stuff, as I’ve been saying on this site for a while now, is incredibly insecure, and yet we trust it so much. Schneier’s book is a great introduction to get those who don’t know much about it up to speed.

More on the Book

You can learn more about the book, Click Here to Kill Everybody, or purchase it, here. That site will also link you to Schneier’s site and blog – which I follow daily via RSS – as well as his social media accounts and other works.

Last week I reviewed Michael Bazzell’s Extreme Privacy book. One thing that Bazzell mentions from time to time is what he calls a “sanity check,” basically a moment to take a deep breath and a step back and ask yourself “am I overdoing it?” Let’s do one of these right now. And no, I’m not out of blog ideas, but lately I’ve been seeing people ask some (in my opinion) really paranoid questions. So let’s take a sanity check.

First off, take a breather. Go take a bubble bath, watch your favorite movie, have a beer, play some video games, read a book – do whatever it is you do to relax. There’s no wrong answer here, just some self-care. I’m a big believer in self-care. I don’t think you can be useful to anyone if you don’t take care of yourself first. If I’m neglecting myself I get snappy and moody, I make sloppy mistakes at work, etc. So take five (or thirty, or sixty, or whatever you need) and go take care of yourself.

Now that you’re back, let’s re-examine ourselves, starting with a threat model. I have a whole page dedicated to this topic, and I’m sure there are lots of other great resources, too. The TL;DR (Too Long; Didn’t Read) version is this: “what am I protecting, and from whom?” We probably all want to protect things like financial information, most of us probably want to protect personal, intimate communications and media, and some of us may also have other aspects of our lives that we want to protect for our own reasons (maybe someone is gay or bi and not ready to “come out” yet, or maybe someone is a conservative in a heavily liberal area and doesn’t feel comfortable saying so publicly). There are no wrong answers here.

I think the third step in building a threat model sometimes gets glossed over, even by me, but this is really where the sanity check comes into play: “what are the consequences if I fail?” Let’s be real: probably 90% (or more) of the people reading this have very little at stake. If I fail in my own privacy model, Google sends me some personalized ads – annoying and invasive, but really not the end of the world. Worst case scenario: someone drains my bank account. That can be overturned, and while it’s annoying, I’m fortunate to have a good social support system in my life – in other words, if a hacker stole all my money, I think my friends and family would help me cover rent until I got it back and repaid them. That’s a worst case scenario.
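If it helps to make the exercise concrete, the threat-model questions can be jotted down as a simple worksheet and ranked. Here’s a toy sketch in Python – the assets, adversaries, 1–5 scores, and the likelihood-times-impact formula are all my own illustration, not a formal methodology:

```python
# Toy threat-model worksheet: list what you're protecting, who from,
# and rough 1-5 scores for how likely the threat is and how bad
# failure would be. Rank by likelihood x impact.
assets = [
    # (what I'm protecting, who from, likelihood, impact)
    ("bank account",    "opportunistic hackers", 2, 4),
    ("browsing habits", "ad networks",           5, 1),
    ("home address",    "a dedicated stalker",   1, 5),
]

# Sort so your effort goes where the combined risk is highest.
ranked = sorted(assets, key=lambda a: a[2] * a[3], reverse=True)

for what, who, likelihood, impact in ranked:
    print(f"{what} (vs {who}): risk score {likelihood * impact}")
```

For most readers, the scores come out low across the board, which is exactly what the sanity check is meant to show.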

Once again, I suspect 90%+ of my readers fall into this category, and that’s totally okay. Be real about it. And as I’ve said in numerous other blog posts, I don’t think that’s a reason to be lazy. It’s not a good excuse to skip two-factor authentication or strong passwords, or to not take the risks of your smart TV seriously. But it does mean that there’s absolutely no reason to work yourself into a paranoid frenzy over a small mistake. Don’t let this stuff damage your mental health. I regularly see people posting things like “I accidentally opened my browser with my VPN off, how screwed am I?” The answer, in most cases, is “not much, really.”

When privacy and security start to negatively interfere with your life, there’s a problem – and I mean any area of your life: your job, your relationships, your mental health. One person once posted that he felt like he was going to be alone forever because of his privacy posture. In his post, he mentioned that girls online wouldn’t download a messenger that required verifying PGP keys, and that he keeps a strictly anti-DRM house, meaning no Netflix or YouTube or anything. I replied pointing out the absolute insanity of what he was asking. If there’s no DRM in your house, what is a girl supposed to do when she comes over? Are you going to read books together in silence? For some that might be a dream come true, but for most people that’s just not realistic. And asking strangers to download a complex messenger and jump through hoops just to chat is a ridiculous demand – I’ve lost count of how many online dates I’ve had that either ghosted me or just fizzled out even without extra hurdles. (That’s not even counting the fact that women are much more likely to be victimized, so he’s already giving off some serious “Criminal Minds” vibes making these demands of strange women online.) That’s an example of privacy gone too far.

I do want to point out that, as with anything, there are exceptions. Some people really can’t afford to have their IP address leaked online. Some people have stalkers – even very capable ones – and they can’t afford to have anything tied to their true home address. They can’t afford to have their picture taken and posted online. They can’t risk using an insecure communication method or a cell phone. That’s fine. I respect that. I also want to point out that some people just enjoy the challenge; I’ve jumped through some considerable hoops to do things like watch an announcement video or sign up for a giveaway. But you know what my boss would have said if I had told him at my job interview that I refuse to use Gmail? “Find another job.” I refuse to let this stuff negatively impact my life. I’m not going to pass up a job that pays well and has a great work environment just because it means I have to use Google Suite on the clock (I just don’t use it on any of my personal devices, but that’s a rant for another time). You shouldn’t either. I explained to the guy above that I never expect any of my dates to use Signal or any other messenger, but I do make it known on the first date that I’m a privacy nerd, and that if things work out I’d like her to eventually use one. In the meantime, I use a VoIP number dedicated to dating.

So take a sanity check. Ask yourself realistically what’s the worst that could happen if you mess up. Sometimes there are real threats and that’s okay but a lot of the time there aren’t. Notice I said “realistically.” The worst that could happen if I mess up my personal privacy model is that some stalker finds me and ax murders my entire family. Is that possible? Sure. Is it likely? No, not really. The worst I’ve gotten in the privacy community is someone calling me a shill every few months. I don’t think anyone has enough of a grudge against me to go that far. The realistic risk of that – for me – is extremely low. Maybe for you that is a risk. But for most of my readers, I doubt it. So stop freaking out and having a complete meltdown when you make a small mistake. Take a breath, learn from it, and do better next time. And if you’re seriously that paranoid when your realistic risk is quite low, then maybe see a therapist. There’s no shame in that. Don’t let this stuff negatively impact your life. I believe we live in a post-scarcity world, meaning I believe there is enough for everyone. If privacy and security are stressing you to the point of hurting your quality of life, that’s a problem. Make sure you take some time periodically to do a sanity check and ensure you aren’t harming yourself, no matter how deep you go. As long as you’re enjoying it and it’s not causing problems, go as hard in the paint as you want. But always keep perspective.

I’m going to lay my cards on the table and admit my bias: I really like Bazzell and hold him in fairly high regard. Bazzell was the person who introduced me to privacy, and I “cut my teeth” on his work. His podcast and his earlier (now out-of-print) “The Complete Privacy & Security Desk Reference: Vol 1” introduced me to why privacy matters, how to go about reclaiming your privacy, the concept that there are levels to privacy, and even the idea that not every level is right for everyone. I was sad when I missed Volume 2 of the desk reference, so I was pretty excited when this new revamped book came out. Fortunately, by the time I had committed to buying it, the second edition was on the horizon, so I just went ahead and waited.

Well last night I finished the book, and so in keeping with the vision of my site, I’ve decided to go ahead and share my thoughts.

About the Author & the Book

Michael Bazzell describes his credentials in relatively vague terms (which makes sense given his work), but they are impressive nonetheless. He claims a long career in law enforcement, including cyber crimes. After retiring from the force, he became a privacy consultant and even worked on Season 1 of the acclaimed TV show Mr Robot. His work on that show catapulted him into celebrity circles, and he now spends his time as a full-time consultant helping people disappear from stalkers, hackers, doxxers, and more. He also conducts live training and speaks at events.

Extreme Privacy is Bazzell’s latest (relatively) comprehensive collection of his own knowledge and experience. The book walks readers through the process Bazzell would follow upon being contacted by a client who needs a “full reboot.” In other words, pretend someone needed to completely disappear from a very advanced enemy with resources to spare, and now pretend you’re along for the ride. The book is not primarily about basic cybersecurity or good social media habits, although it does cover those topics.

The Good

This book is incredibly thorough. I can’t stress that enough. It clocks in at just over 550 pages, and every page is jammed with ideas, strategies, instructions, and examples. There’s no fluff or padding to speak of. It also covers situations that, in my opinion, one wouldn’t normally think of – for example, there’s an entire chapter about pet adoption.

I also found the book pretty easy to grasp in most situations. Bazzell writes as if he’s having a discussion with another privacy enthusiast. He knows his audience: he doesn’t dumb things down as if talking to grandma, but at the same time he doesn’t get lost in the super-technical details as if he were talking to a programmer. He keeps things – for the most part – at a level where a typical competent computer user can grasp what he’s talking about.

Another thing I appreciated about the book – and Bazzell in general – is his consistent “sanity checks.” Every so often, especially after outlining a particularly extreme strategy, Bazzell will make a point of saying that the idea is extreme and may not be applicable to everyone. He encourages his readers to consider their own unique situation and whether the work involved in each strategy is worth the payoff. He also warns that some of his strategies may have unintended negative consequences and reminds readers that not everything is right for everyone. As someone who shares that sentiment – that there is no “one size fits all” – I really appreciate that approach.

The Bad

The book can sometimes be a little too thorough. While I appreciate Bazzell’s desire to leave no stone unturned, I felt my eyes glaze over in a lot of parts where he gives example legal documents, describes step-by-step installation instructions, or occasionally repeats himself (again, purposely, in a desire to be thorough). By the end of the book, I found myself skipping or skimming certain parts. For example, he’s got several pages about how to install and configure a pfSense firewall. pfSense is not my firewall of choice (although there’s nothing wrong with it), so I skimmed those pages. If I ever do decide to use pfSense, I can always go back and check his instructions for a detailed walkthrough.

It also sometimes feels as if the thoroughness is a bit disproportionate. For example, when discussing how to get a car anonymously, Bazzell walks through several scenarios and often repeats himself to be thorough. However, early in the book, he decides that his example state of residence is going to be South Dakota, despite Texas (and, I think, Florida) also meeting his requirements. He does not offer the same thoroughness if you decide to use one of those states for residency. In his defense, though, the steps and laws are subject to change quickly, so even someone using South Dakota should consult current information and not rely solely on his book.

Finally, while his book does occasionally mention money as a factor, it is clearly aimed at people who have relatively large amounts of disposable income. For example, when talking about setting up a VPN and firewall on a home network router, he instantly zeroes in on the Protectli firewall, a solution that starts at $150 on the low end (not bad) but can quickly max out at $1600. That can be a high price tag for some people (the more reasonable packages land in the mid hundreds, but still). He does mention that you can opt for a different router and flash the firmware yourself, but he offers very little explanation of which firmware he suggests or what to look for in a home router, leaving the reader to wonder whether there are any less expensive options out there and, if so, which ones. This is, honestly, kind of nitpicking, but it does seem like a bit of an oversight for such a thorough book, and it’s pretty clear from reading it that he’s used to working with clients who have, at the very least, a relatively high budget.

Final Verdict

I would consider this book a must-read for anyone who’s interested in privacy beyond the average “I don’t want my ex cyberstalking my Facebook.” This book is deep, but it’s designed for people who need their privacy. Police, government employees, people who are concerned they might have or someday get a stalker, people who have controversial jobs or opinions and want to keep their families safe. I hesitate to call this book a must-read for everyone because it is so in-depth and over-the-top, but if you are interested in privacy I think it would be good to have on hand. You can always ignore the parts that don’t apply to you, or come back to them later. Personally I put about a dozen sticky notes on various pages that contained information I knew I would almost certainly come back to at a later date for various reasons.

More on the Book

You can purchase the book here. Bazzell also has a blog and a podcast, as well as live events and additional books. You can find all of them on the website I linked.

How insatiable curiosity created an immutable treasure trove of data privacy nightmares, and you've paid for it

While chatting with another privacy enthusiast on the web recently, the topic of DNA testing came up. This person pointed out how little information exists online in a consolidated, easy-to-understand format for privacy enthusiasts, and how they had to go do their own research before they could discuss the matter with their own family. Given both my interest in true crime and the fact that my own family has taken these tests in the past (though not me personally), I was surprised to realize I had never tackled this subject before. So, with the help of 21x, this week I'm going to attempt to dig into it.

DNA Testing

I'm sure that if you're reading this you're familiar with DNA testing, but just in case you're not (or need a refresher): it most often takes the form of services like 23AndMe or Ancestry.com, which offer an inexpensive at-home DNA collection kit (usually something along the lines of spitting in a tube and mailing it in); in return, you're told about your ancestry, such as which countries your ancestors may have come from. Some services even offer to identify potential long-lost family members and help put you in touch.

Don't get me wrong, DNA testing has some incredible promise. Full disclosure: Alzheimer's runs in my family, and that keeps me up at night. As a child I watched my grandfather deteriorate into a helpless heap who could remember nothing and do nothing; he drooled on himself and was fed by the nurses. I am all in favor of technology that will help me avoid that fate, and I would love to know if I'm at an elevated risk so I can get treatment early. Additionally, I am in favor of using this same technology to help identify victims, find criminals, and help families get closure and justice. But we have to realize that, as with all modern technology, this is a double-edged sword. We can't let mainstream articles full of rainbows and butterflies lull us into a false sense of security by painting utopian pictures of early cancer detection and crime prevention.

Not to mention that the reach of consumer-grade DNA testing can be incredibly narrow. A person has about three billion base pairs that would need to be analyzed to get the full picture of an individual's genome; this cannot be done for $100. Instead, your average at-home DNA test sequences only between half a million and one million base pairs. That's plenty to identify you, or to offer some very limited health insights, but it doesn't come close to the full picture of your genome that would be useful to your doctor.
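To put those numbers in perspective, here's a quick back-of-the-envelope calculation using the ballpark figures above:

```python
# Fraction of the genome a typical consumer DNA test actually reads,
# using rough figures: ~3 billion base pairs in a human genome,
# ~0.5-1 million base pairs sequenced by an at-home kit.
total_base_pairs = 3_000_000_000
tested_low, tested_high = 500_000, 1_000_000

low_pct = tested_low / total_base_pairs * 100
high_pct = tested_high / total_base_pairs * 100
print(f"Consumer tests cover roughly {low_pct:.3f}%-{high_pct:.3f}% of the genome")
# Well under a tenth of one percent -- and yet still enough to identify you.
```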

The Risks

DNA testing carries great benefits, and with them great potential for abuse. For example, in my own life, what happens if I get tested and a health insurance company declines coverage because I'm at risk for Alzheimer's? What if they raise my premiums to an unrealistic level? Many countries are still engaged in aggressive, overt racial discrimination, most notably China with its treatment of Uyghur Muslims. Imagine how DNA mapping could be used to refine that process: people who would normally not be at risk – maybe because they have left the ethnic community or don't look like they belong to it – could suddenly be proven to have roots and become targets. Imagine how this technology could be used to discriminate against transgender people.

The Problems

I think genetic privacy is not a widely covered topic because it has no easy technological solutions. As much as we say that privacy is a human right – and it is – so is the right to waive that privacy if you want. I have the right not to put my entire life on display via Facebook or YouTube, but I also have the right to do so if I want. While long-term solutions to privacy concerns will always be fundamentally economic and political, there is some comfort in the fact that if you don't want your phone provider reading your text messages, there's something concrete you can do to implement effective controls. DNA is not so black and white. I can strong-arm companies into respecting my privacy by simply opting out – by using encrypted communication or, in some cases, not using their services at all. With DNA, I don't have that option. I have to give up some of that privacy just to get a medical test. In fact, most newborns are tested within minutes of birth for major problems, because many can be fixed or treated if caught right away. But what happens to that blood sample afterwards is rarely discussed. Some states require the sample to be destroyed at the parents' request, but parents often don't know they have that right, or even that the sample is kept. The sample can be kept for decades and is often sold or used in research. In time it could even be upcycled into criminal databases for faster criminal identification.

A bigger problem that rarely gets discussed, I think, is the fact that DNA matching is not one-to-one. Think of Joseph DeAngelo, the Golden State Killer. DeAngelo evaded detection for decades – his last crime was in 1986 – and he was only captured in 2018 after members of his family submitted DNA tests. Family members' DNA is similar enough that it got flagged, prompting police to investigate the family more closely, at which point they were able to narrow it down and positively identify DeAngelo as their suspect. He would later confess. In this context, DNA is essentially like metadata: if you have enough of it, you don't need the content. If enough of my family members submit DNA tests, my own DNA is virtually unneeded – the picture is complete enough to paint in the missing pieces. Or have you ever considered how 4th Amendment rights interact with DNA? Law enforcement compared DeAngelo's DNA with data found on the public DNA-sharing website GEDmatch. Since the DNA the Golden State Killer shares with his family is also his, does that fall under the principle that an individual should not be subject to unreasonable searches? Is shared DNA part of 'persons, houses, papers, and effects'? Even if you fall on the side of 'it is not,' it is inarguable that these issues should be tackled through the democratic legislative process.

Ultimately, where do my rights begin and my family members' rights end? If my mother chooses to get a 23AndMe test, that's her right. But what about my right to not have my DNA obtained by third party researchers? Or insurance companies? Police? Private individuals can already buy your geolocation from your phone provider, should they be able to buy your DNA?

This all factors back into the classic “nothing to hide” argument – the idea that if I'm not doing anything wrong, I shouldn't be worried about putting my life on display – but the problem is so much deeper. I don't mind that DeAngelo got caught. In fact, I'm sad it didn't happen sooner; the man was a monster who got to live a long and privileged life, and catching him at this point is a formality. But capabilities trickle down: there was a time when IMSI catchers were only used by the highest levels of law enforcement, and now every police department has one they got off eBay.

We can pass laws requiring health companies not to discriminate based on DNA tests, or requiring research companies to get consent and disclose how the DNA is used, but how often are companies caught violating these laws? The fines are always laughably pathetic – often less than 1% of a company's annual revenue – to the point where many privacy-invading companies simply see them as a cost of doing business. That shouldn't stop us from passing these laws, but clearly we can't rely on them as the sole solution.

The Solutions?

So what is the solution? We don't know, and that's one reason this topic is so rarely tackled by privacy advocates. I can't stop my family from taking a DNA test. I can ask them not to and explain why, but I can't force them – and honestly, I didn't even know my entire family had done them until years after the fact. Some 26 million Americans have taken an ancestry test so far, and one estimate says that if the growth trend holds, 100 million Americans will have taken one within the next 10 years. Considering that sometimes even a distant cousin's DNA can reveal meaningful information about you, 100 million Americans on file essentially covers the entire country. And, not to sound like a broken record, but that's great for the utopian reasons: catching bad guys, catching diseases while they're still treatable, even curing some of them. But if surveillance capitalism has taught us one thing, it's that abuse in the name of profit is inevitable. It won't be long before the same data used to cure one disease is used to disqualify health insurance applicants for the incurable ones. Or someone will argue that your DNA makes you more prone to crime, bringing back the ghosts of social-Darwinist policies we thought we had left in the cinders of the Second World War. Or declare you unfit for some type of job. The discrimination is real, and it will happen. Just this month Toyota announced its intention to track driver data and sell it to insurance brokers. But you can change your car, adjust your driving pattern, or choose not to drive a Toyota. DNA is immutable. This stuff happens; it's not as tin-foil-hat as it sounds.
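As a rough sanity check on that estimate (the arithmetic here is mine, not the original study's): going from about 26 million tests to 100 million in 10 years implies a sustained annual growth rate of roughly 14%, which is steep but consistent with how fast these services have been spreading:

```python
# Implied compound annual growth rate (CAGR) to go from ~26 million
# Americans tested today to ~100 million within 10 years.
current, projected, years = 26_000_000, 100_000_000, 10
cagr = (projected / current) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%} per year")
```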

Next Steps

So, what can we actually do? My suggestions are as follows:

  1. Speak to at least your immediate family and let them know how you feel about issues of genetic privacy. Be on the lookout for ancestry-related conversations or DNA testing commercials and voice your concerns where appropriate. What's more, educate them on these issues. Genetics are complicated, and the pull of insight into our ancestry is strong. Make sure that if they do make these choices on your behalf, they cannot claim ignorance as to the gravity of their decisions.

  2. Take steps to minimize what is already out there. Most DNA testing companies allow the customer to request destruction of the existing biological sample. Remember, only a tiny sliver of your DNA is routinely tested; don't wait for the technology to advance to the point where your already-submitted sample can be re-tested to violate your privacy further. If your child, or even you, were tested as a baby, inquire into what happened to those samples and the results. Do they sit in some storage room somewhere? Ask whether you can have them destroyed.

  3. Urge anyone who has taken a test to delete their existing DNA service account and data. If you are subject to a jurisdiction with strong privacy laws, such as California or the EU, use those laws to compel companies to destroy your data, provided the person who took the test agrees. If they do not, voice your concerns.

  4. Support organizations that champion these issues. I know pickings are slim in terms of political representation on these issues, but do not let your silence be taken as complicity.

Tech changes fast, so be sure to check TheNewOil.org for the latest recommendations on tools, services, settings, and more. You can find our other content across the web here or support our work in a variety of ways here. You can also leave a comment on this post here: Discuss...

This week I was reintroduced to a phrase I’m coming to love as I interact with the privacy community: “don’t let perfection be the enemy of good enough.” If you recommend Signal to someone for its security, someone else will complain that Signal uses phone numbers and AWS as a backbone. If you recommend Session, someone will complain that it’s based in Australia – a Five Eyes country. If you recommend Matrix, someone will complain about the metadata collected, and if you recommend XMPP someone will complain about, well, something I’m sure.

About a month ago, I went off on someone because they tried to argue that MySudo is a “joke.” I readily agreed with them that MySudo is not open source and is not end-to-end encrypted (unless talking to other MySudo users), and therefore I wouldn’t recommend it for seriously sensitive communication, but I proudly promote the app as a way to have a wide array of both VoIP numbers and capabilities (such as email and SMS) readily available for a low price. This is great for not using your SIM number and for compartmentalizing your life. The person replied to me by saying that a better solution is to buy multiple phones in cash with multiple SIMs and use them as needed. I quickly pointed out that this solution is ridiculous because 1) it’s expensive, 2) it’s not user friendly, and 3) by turning the phones on and off as needed, you’re already creating a pattern that can be tracked back to you.

The fact is, anything can be hacked or traced. Ask literally any security expert out there and they’ll agree with me. If you cause enough trouble, someone with enough resources will find you. It’s only a matter of time. This is one of the reasons I repeat over and over on my site that I’m not trying to teach you how to do illegal things. That’s not just a disclaimer to cover my butt; it’s because you will get caught. The goal of privacy and security – for the average person – is to find the right balance between protection and convenience. I mentioned in another blog post that if you make your security setup too difficult, you’ll simply never use it, so you have to weigh imperfect solutions that will actually get used against both no solution at all and solutions so hardcore you’ll never use them, thereby defeating the purpose.

Which brings us back to my first paragraph. The fact of the matter is, no solution is ideal. ProtonMail explicitly says on their website that if you’re leaking Snowden-level secrets, you probably shouldn’t be using email at all (he certainly didn’t). If you’re planning a revolution, you probably shouldn’t be using Signal even if it does have top-level security. You should be getting together in person. Anyone who claims to be perfectly secure or anonymous is – point blank – full of sh*t and you should run from them like Jason Voorhees. You shouldn’t rely on these electronic means, which will someday become insecure and for all we know might already be. State technology tends to be roughly a decade ahead of what’s available to the public, so you should assume that the government can read everything you do.

For most of us, that’s okay. For most of us, the government is not interested in our selfies, bad puns, dinner plans, Starbucks orders, and the fact that we’re running fifteen minutes late. However, the fact that we can’t have perfect communication doesn’t mean we should throw the baby out with the bath water. “Signal requires a phone number and is based in the US and uses Amazon for infrastructure.” Those are all perfectly valid complaints depending on your threat model and what you’re communicating. When my partner gives me her debit card and asks me to pick up her medication at the pharmacy while I’m out, I would rather use Signal than SMS to ask what the PIN is or to verify that I’m not missing any of the medications that are ready. Just because Signal isn’t perfect doesn’t mean that I don’t use it. I wish she would use something a little more decentralized like Matrix or XMPP, but I’m not going to let perfection be the enemy of good enough. For us, for that situation, Signal is good enough.

Of course, it goes without saying that we also shouldn’t let good enough be the enemy of great. Many people fall short of their dreams in life not because they try and fail, but because they say “eh, good enough” without striving for more. Someone who wants a penthouse gets a corner office and says “good enough. I have it better than most and I should just be grateful.” We should be grateful that we have hyper-secure options like Signal, decentralized options like XMPP, free options like Matrix, or metadata-resistant options like Session.

But we shouldn’t stop there. We should demand better. As with privacy and security itself, it’s a fine line. We shouldn’t forgo the pursuit of perfection just because these products are good enough, but at the same time we should respect that these products do us a huge service, often at little or no cost to us, and often at massive labor and cost to the developers, who can range from a single person in their bedroom to a medium-sized company struggling to keep the lights on. We should also respect that different companies make different products aimed at different people. For example, ProtonMail started with the vision of making encrypted email easily accessible to the masses. They admit that they are not perfect because perfection would run counter to that mission – that is, it would make encryption not user-friendly and therefore not easily accessible to the masses. Just as my own site often chooses not to post certain information because it falls outside my target audience, many popular services are popular for a reason: they’re choosing to make the trade-off between security and convenience. It’s better to give a lot of people a moderate level of protection than to give a few people hardcore protection and alienate everyone else. At least, I think so.

I want to end by saying that you are always welcome to go the extra mile. I encourage “normies” to use various programs and settings to lock down the telemetry on their computers and give themselves a little more privacy and security, whether that’s Windows or Mac. But I don’t see that as a reason I shouldn’t use Linux. Just because other people are content with “good enough” doesn’t mean that you aren’t allowed to go the extra mile. And yes, people should be making that decision with education and awareness, knowing the risks and benefits. But the answer there, in my opinion, is not to force people to use difficult solutions, but rather to educate them on why those difficult solutions are better. Forcing someone to do something they don’t want to do for their own good will only lead to resistance and eventually abandonment of the proposed solutions, but education will lead to good decisions being made willingly and stuck with. At least, that’s my two cents.

Regardless, please stop letting perfect be the enemy of adequate, and remember that not everyone has the same threat model. Respect each other.

