Hacker News
A Court Order is an Insider Attack (freedom-to-tinker.com)
146 points by Amadou on Oct 15, 2013 | 47 comments


This article begins by giving away the moral high-ground.

"They ask: If court orders are legitimate, why should we allow engineers to design services that protect users against court-ordered access?"

Why on earth would you allow that a court order is legitimate? Both the tactic and the execution are of questionable legality, and critically that legality is flexible.

Remember that the clearly illegal complicit acts of telecommunication companies were made retroactively legal through a grant of immunity. The idea that you would begin your article by allowing the emperor to retain the assumption of clothes undermines the credibility of your remaining contentions. Courts having the right to demand things from citizens that they do not wish to divulge is not some force of nature, it has not always existed, does not exist in all systems, and need not exist in the way it currently does.

So cut the crap of making a small company justify its moral, conscientious, and brave action, and of forcing it to justify its existence.


I submit that you have rather egregiously misread TFA. The antecedent of "They" in this case is the judge and other law enforcement fans. Felten's point is not to stipulate to their mistaken assumption, but to contradict it. That is, he shows how the threat represented by the court system is indistinguishable from that of various other bad actors. He doesn't spare Lavabit from critique either, by pointing out how better systems design would have better resisted the "insider" attack carried out using the courts.


A man with a crowbar will beat you for your password even if you assure him you can't divulge it. The problem is with the man with the crowbar, not whether or not you use a secure system.


A man with a crowbar won't continue to beat you for your password if you can prove that you can't divulge it, unless he is beyond the reach of logic. At that point he might as well be beating you because he doesn't like the color of the moon, and you would best get together with some of your friends and divest the lunatic of his crowbar.


> and you would best get together with some of your friends and divest the lunatic of his crowbar.

And thus government was born.


> And thus government was born.

And then government becomes the lunatic with a crowbar and here we are.



But there is a logical reason to beat you for the password you can't divulge, namely to serve as an example and deterrent to any future persons who might attempt to restrict access via the same mechanism.


It won't convince the crowbar enthusiasts (who support every abuse of citizens by state agents), but the existence of more secure email providers will eventually lead reasonable people to question why the regulation of email provision requires so many crowbars.


What about deniable encryption, where you can give them a key that decrypts the ciphertext to innocuous stuff while concealing the real key that decrypts to the real plaintext? What does the crowbar do for the attacker then?
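One classic construction behind deniable encryption is the one-time pad: for a fixed ciphertext, every equal-length plaintext has a matching key, so a decoy key is indistinguishable from the real one. A toy sketch (all names are mine; real deniable systems, such as hidden volumes, are far more involved):

```python
import os

def otp_encrypt(real: bytes, decoy: bytes):
    """Toy deniable scheme on one-time pads: one ciphertext, two keys.

    With a one-time pad, ANY plaintext of the right length has some key that
    "decrypts" the ciphertext to it, so a coerced user can surrender the
    decoy key and the attacker cannot tell it apart from the real one.
    """
    assert len(real) == len(decoy), "pad both messages to the same length"
    ciphertext = os.urandom(len(real))   # random bytes double as the ciphertext
    real_key = bytes(c ^ m for c, m in zip(ciphertext, real))
    decoy_key = bytes(c ^ m for c, m in zip(ciphertext, decoy))
    return ciphertext, real_key, decoy_key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, k_real, k_decoy = otp_encrypt(b"meet at dawn", b"grocery list")
assert otp_decrypt(ct, k_real) == b"meet at dawn"
assert otp_decrypt(ct, k_decoy) == b"grocery list"
```

The reply below is the standard objection: once an attacker suspects a hidden layer exists, deniability does not make the coercion stop.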


Continues beating until it decrypts into what he wants it to decrypt.


"Perpetrators often attempt to justify their acts of torture and ill-treatment by the need to gather information. Such conceptualisations obscure the purpose of torture... The aim of torture is to dehumanise the victim, break his/her will, and at the same time set horrific examples for those who come in contact with the victim. In this way, torture can break or damage the will and coherence of entire communities." -- Physicians for Human Rights


> Why on earth would you allow that a court order is legitimate?

You won't get very far when arguing with judges & lawyers if you deny that, and they are a target audience.

The point that requiring law enforcement access necessarily weakens a system's security against malicious actors is one our legal system needs to come to terms with. The courts need to balance the public's legitimate need for security against the wishes of law enforcement; the most likely alternative is for them to ignore our needs in favor of investigators' wishes.


I don't think the author gives away the moral high-ground. I think you're misinterpreting his sentence. I interpreted it as if the word "even" was at the beginning, as in:

Even if court orders are legitimate, why should we allow engineers to design services that protect users against court-ordered access?

The point being, whether or not court orders are legitimate, it's perfectly reasonable to engineer systems to protect against insider attacks.


Is everything a political litmus test? This is a great article making a logical rational argument for why, even if we were to grant that a court order is legitimate, we should still build systems that make them hard to support.

This is an article about designing systems to be secure, but sadly the discussion descends into an echo chamber of "boo bad government, stop enabling them".


Just because a HN blog post complained about this supposed "echo chamber" yesterday doesn't mean we all have to stop talking about it. The role of government is a complicated, unsolved issue and it happens to be something that a lot of people in this community care about. I think that's a good thing.


Exclusively addressing one point of an argument does not magically concede all other points.


Did you even read the article? The whole point is that sites like Lavabit should exist, and that more data security is better.


Isn't this the concept of "host proof hosting"? The idea is that the encryption secret is never even transmitted to the host so they have no way of decrypting it.

Why wasn't Lavabit set up in a similar fashion? Why isn't this more widely practiced?
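The core of the host-proof idea can be sketched with the standard library alone. This is an illustrative toy, not a real design: the SHA-256 counter-mode keystream below stands in for what a real client would do with an authenticated cipher like AES-GCM, and every name here is made up.

```python
import hashlib, os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Key derivation happens client-side; the server only ever sees ciphertext.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy SHA-256 counter-mode stream cipher (XOR to encrypt, XOR to decrypt).
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + nonce + offset.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[offset:offset + 32], pad))
    return bytes(out)

# "Client" side: encrypt before upload.
salt, nonce = os.urandom(16), os.urandom(16)
key = derive_key("correct horse battery staple", salt)
stored_blob = keystream_xor(key, nonce, b"dear alice, ...")

# The host stores (salt, nonce, stored_blob) but never the passphrase or key,
# so a court order against the host yields only ciphertext.
assert keystream_xor(key, nonce, stored_blob) == b"dear alice, ..."
```

The operational point is in the last comment: the secret needed to decrypt simply never exists on the server, which is what distinguishes this from access-control schemes the operator can be compelled to bypass.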


It's not widely practiced because it makes for a poor user experience in most cases - what happens when a user forgets their secret? Either they can never access their email account again, or at least they can never read their old emails. Besides that, you also lose the ability to do much of anything on the server - so super-fast search à la Gmail is out.

Most users will choose a provider that has good protection against illegal attacks (like crackers), and little protection against legal attacks (like warrants) because those are the threats they see themselves facing.


> It's not widely practiced because it makes for a poor user experience in most cases

Skype seemed to work just fine that way until it was captured and neutered.


Skype does not provide the same feature set as a Gmail account with searchable archives.


" … because those are the threats they see themselves facing."

Or perhaps "because those were the threats they saw themselves facing before Snowden's revelations"


This is what TFA means when it says:

In the end, what led to Lavabit's shutdown was not that the company's technology was too resistant to insider attacks, but that it wasn't resistant enough.


Lavabit was not set up like that because Lavabit could not be set up like that. You cannot have webmail that works like that; at best, you have a client that is delivered via the web, but then why not just run the client locally?

Basically, if you want encryption to be local, then you do not want an encryption service. Which is exactly what you should be doing anyway; privacy and security are not services.


There are two huge differences that make court orders completely different from inside attacks:

1. Court orders can be freely targeted.

It's incredibly hard and costly to make a system resistant to inside attacks from everyone. Not just costly from a technical implementation perspective, but from a business operations perspective. For example, software engineers might occasionally want to look at some user data in order to diagnose a bug. Not having access to the data would make their lives much harder. Certain analytics might not be able to be generated which leaves the business flying blind.

Instead, an acceptable tradeoff is that access is restricted and managed to mitigate risk. For example, access is only granted when necessary and sensitive operations might require two separate people to sign off. This makes it significantly more difficult for a malicious actor to bribe the right people but makes it no more difficult for law enforcement. Law enforcement can legally compel bypasses around all the safeguards.

2. Court Orders don't care about being detected.

Instead of making it technically impossible, it's often far more effective to deter inside attacks through robust detection. Audit logs, clear policies and dire consequences are usually enough to shift the calculus of inside attacks into "not being worth it". Such a calculus does not apply to court orders because they don't care about being detected, because they're not doing anything "wrong".

On the surface, court orders and inside attacks might seem very similar technically, but viewed from an overall business perspective they are vastly different, and the comparison between the two is unhelpful.
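The audit-log deterrent from point 2 can be made tamper-evident with a hash chain, so an insider can still read data but cannot quietly rewrite the record of having done so. A minimal sketch (class and field names are mine):

```python
import hashlib, json

class AuditLog:
    """Append-only log where each entry commits to the previous one.

    Tampering with any past entry breaks every later hash, so edits or
    deletions by an insider are detectable (though not preventable)."""

    def __init__(self):
        self.entries = []
        self.head = "0" * 64                      # genesis hash

    def append(self, actor: str, action: str):
        record = json.dumps({"prev": self.head, "actor": actor, "action": action})
        self.head = hashlib.sha256(record.encode()).hexdigest()
        self.entries.append((record, self.head))

    def verify(self) -> bool:
        prev = "0" * 64
        for record, digest in self.entries:
            if json.loads(record)["prev"] != prev:
                return False
            if hashlib.sha256(record.encode()).hexdigest() != digest:
                return False
            prev = digest
        return True

log = AuditLog()
log.append("alice", "read mailbox 42")
log.append("bob", "export user table")
assert log.verify()

# An insider silently rewriting history is caught:
log.entries[0] = (log.entries[0][0].replace("42", "7"), log.entries[0][1])
assert not log.verify()
```

This also illustrates the comment's asymmetry: a court order can compel access through the front door and simply does not care that the access is logged.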


Bah. 1. Law enforcement cannot compel a number to reveal its prime factors, or people beyond its jurisdiction to reveal secrets. 2. The court order that started all this did care about being detected: It demanded access such that Lavabit could not learn whose mail was being read.


Designing a system to resist law enforcement is not the only way to make a system that resists insider attacks. In fact, it's a terrible way. Banks have figured out how to comply with the law, allowing law enforcement to seize bank assets, without letting employees abscond with deposits; it's not difficult. The two problems are not the same.


That's not true. The financial system does not primarily rest on technical safeguards. It rests on legal safeguards and risk management (i.e. it puts the onus to strike the balance between security and convenience on the company in the best position to decide what's a legitimate request).

If I have read-only access to your bank, or to Mint, or if I can steal your mail, and I've ever received a check from you, I can set up an ACH relationship with my broker. I could probably even pull money out of your account. The transfer would be quickly reversed, and I'd be easily caught.


>Banks have figured out how to comply with the law, allowing law enforcement to seize bank assets, without letting employees abscond with deposits; it's not difficult.

Are you sure? It seems that all banks have done is to be able to resist attacks from adversaries up to a certain size. Typical criminals are smaller than this, typical governments are larger. But when the reverse is true the banks fold to the criminals (as in various high-corruption countries), or the government folds to the banks and lets them out of well-deserved hot water when the banks are the ones thieving and cheating.

I don't like either of those things if I'm a depositor who is trying not to have deposits stolen by criminals, bankers or corrupt governments.


Insider attacks are definitely not a solved problem for banks; attacks by insiders, or with their cooperation, are a major part of realized fraud losses.

For banks (unlike data companies) the most effective anti-fraud tools are actually not about prevention, but detection and mitigation. Well, also insurance and prosecution.


Just because one can be done without the other does not mean that doing one without the other is preferable to doing both.


    Had Lavabit had in place measures to prevent disclosure of 
    its master key, it would have been unable to comply with 
    the ultimate court order
Is this possible? Can you have a system that allows you access, but doesn't allow you to give access to others? Or is it possible to make a system that not even you have access to and still be maintainable?

Edit: Upon thinking about this further, couldn't a solution to the Byzantine generals problem, like the one Bitcoin uses, solve this?

e.g. Thinking outside the box here, what if every person who uses the mail system had to collectively solve some hashing problem based on the source code or system change, where the solution allows the software patch or upgrade to be applied? If 50% or more of the users solve the hashing problem after inspecting the code, the patch would be applied.


Well, actually it is not that hard. The key is held in escrow in another jurisdiction with a dead man's switch, and stored only in RAM locally. When the legal order comes you just shut down the service and wait until the key is destroyed.

You can also rig the machine on which the key is used with something nasty that will melt it from the inside against physical tampering - let's call it SWAT raid protection.

Disclaimer - IANAL


I'm still convinced that triggering such a dead man's switch would be tantamount to destruction of evidence. Courts aren't stupid.


I do have one problem with part of this article.

> From a purely technological standpoint, these two scenarios are exactly the same [...] Neither of these differences is visible to the company’s technology - it can’t read the employee’s mind to learn the motivation[...]. Technical measures that prevent one access scenario will unavoidably prevent the other one.

Emphasis on the last sentence - since this only holds due to the implementation in the chosen example.

As a counterpoint example, consider a system that allows user data access only after a request has been made to access that data, the request is recorded in a request log system of some sort, and approval for the request goes through the appropriate checks (legal and procedural), at which point it's signed off on and data access can occur.

(The counter-counter-argument is that technology isn't perfect and someone with the right access could potentially get around it ... but enterprise key management is a real thing, folks)

In this sort of system, the "intent of the employee" piece is encoded in the checks/approval piece as long as you make sure the same employee making the request is not the one with approval rights and that legal representation gets included in the loop for these types of accesses.
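A minimal sketch of such a request/approval gate, with the requester/approver separation and legal sign-off described above (all names and the exact role split are illustrative assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    requester: str
    target: str
    reason: str
    approvals: set = field(default_factory=set)

class AccessGate:
    """Toy two-person gate: access needs a second engineer plus legal
    sign-off, and every request is recorded whether or not it is granted."""

    def __init__(self):
        self.request_log = []                 # the audit trail

    def request(self, requester, target, reason):
        req = AccessRequest(requester, target, reason)
        self.request_log.append(req)
        return req

    def approve(self, req, approver, is_legal=False):
        if approver == req.requester:
            raise PermissionError("requester cannot approve their own request")
        req.approvals.add(("legal" if is_legal else "eng", approver))

    def granted(self, req):
        roles = {role for role, _ in req.approvals}
        return {"eng", "legal"} <= roles      # need one approval of each role

gate = AccessGate()
req = gate.request("alice", "mailbox:42", "debugging a bounce bug")
gate.approve(req, "bob")                      # second engineer signs off
assert not gate.granted(req)                  # still missing legal review
gate.approve(req, "carol", is_legal=True)
assert gate.granted(req)
```

The policy itself is trivial to encode; as the comment goes on to note, the real question is whether an attacker can compel or compromise enough of the humans in the loop.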

In this situation the hypothetical criminal syndicate would have to mount a larger and larger attack involving more people, greatly reducing the chance of it happening.

A government, however, would just pile on the legal requests and increase the number of employees involved until the request could potentially be satisfied. By doing it this way, you make it unlikely for the government to il/legally pressure a single individual and instead involve your company's legal representation and a larger portion of the government's legal apparatus in determining if the request is valid - and in the meantime create some sort of documentation about the event (even if you can't publish / talk about the documentation while you're going through the courts).

The only advantage in defensive design where you literally cannot access your customer's information is that it absolves you of knowledge of what any one specific customer is doing. However, you increase your risk exposure to your services being used for illicit purposes (as defined by whoever is bringing a lawsuit against you), potentially being shut down, and potentially losing money as a result.

Some companies are ok with accepting that cost (in return for something that you can't put a price on) - most aren't.

There is a big difference between "no employee can access the data" and "no single employee can access the data".


While not disputing that your counter-example is possible – I strongly reject the implication that businesses are therefore required to build that extra infrastructure, on the off-chance that some legal request might at some point in the future ask for sensitive customer data.

If a business chooses not to develop that, because they've decided that disallowing insider access is a better choice for their business, that's an entirely sensible and legal choice to make. If a court later decides they're not happy with that - the court must be the one who bears the cost of implementing a properly auditable and secured system, not the business who has no need for it. (And I note that Levison offered to build such a system for only $3500, which seems to me to be a _very_ reasonable price, and instead the government chose to play hardball. And to Levison's great credit, lost.)

Note too, that there are many instances where not having customer data available is exactly the right way to build things. Any court in the land can come and ask me to tell them which credit card got used at any of my client's ecommerce sites – I'll say "Sorry, never even saw the CC number. Absolutely no way I can disclose that to you. If you want me to implement a system that allows me to gather that data for you, speak to my bank and Visa/Mastercard/AMEX et al, and provide me with written court-backed absolvement from my PCI obligations, and I'll have my people prepare you a quote for costs on modifying the software."


"the court must be the one who bears the cost of implementing a properly auditable and secured system, not the business who has no need for it. "

The court is essentially an arbiter. It is on no one's side, and actually has no real skin in the game.

This means it is not going to pay for this, it is going to decide which party pays for it :)

If the parties fail to pay, or fail to comply, they will possibly be sent to jail. Note that Levison has not really won yet, since he still has a contempt hearing coming up ...


A couple of somewhat contradictory points:

(1) Procedures are only as good as the people who follow them. Somebody has to actually access the data - that guy is the insider you have to worry about bypassing the procedures.

Maybe I am just not imaginative enough, but I can't think of a scenario that is completely immune to the single insider. The best I can come up with would be key-splitting such that everybody with a piece of the key would need to agree that it is a valid access request. But even then you have to worry about the process for generating the key before it is split. Even if everybody is in the room when it is generated and split, you have to worry about whether the computer doing the generation wasn't compromised by the insider to surreptitiously make a copy of the whole key.
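The key-splitting idea is easy to sketch as a trivial n-of-n XOR split; a production system would use Shamir's secret sharing to get k-of-n thresholds, and the commenter's caveat still stands, since the machine that generates the key sees it whole. All names here are illustrative:

```python
import os

def split_key(key: bytes, holders: int) -> list:
    """n-of-n XOR split: every share is required to rebuild the key, and
    any subset short of all of them reveals nothing about it."""
    shares = [os.urandom(len(key)) for _ in range(holders - 1)]
    last = key
    for s in shares:                                # last share = key XOR all others
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine(shares: list) -> bytes:
    key = bytes(len(shares[0]))
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

master = os.urandom(32)
shares = split_key(master, 5)
assert combine(shares) == master          # all five holders together succeed
assert combine(shares[:4]) != master      # any four recover only random noise
```

Because every share but the last is uniformly random, an insider holding four of the five shares learns literally nothing, which is the information-theoretic property the comment is reaching for.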

(2) The idea of creating a system for data access presupposes that any developer must cater to the potential desires of law enforcement and make the effort ($$$) to accommodate them before they've even issued a valid court order. CALEA has put that burden on some telecom operators, but on the other hand the Clipper Chip with its key escrow system was an attempt to manipulate the market into building such access into all encrypted comms and Congress didn't even come close to passing that for government use and so the free market didn't even try.

CALEA: https://en.wikipedia.org/wiki/Calea Clipper: https://en.wikipedia.org/wiki/Clipper_chip

EDITED TO ADD:

(3) There is also the trade-off between security and cost. We all know there is no such thing as perfect security, only an increasing level of cost to circumvent or penetrate. As you mention, a procedural system gets more expensive to compromise the larger the number of people necessary to grant access. But at what point does the cost to bribe or otherwise co-opt all those people equal or exceed the cost to crack the encryption? Just a SWAG, but let's say a well-implemented encryption system takes $100M to crack, how many people can you compromise for the same amount of money? What if $25M is enough to buy the entire company outright?

A system that is more expensive to crack because the only means of access is through good crypto is a more valuable service than one with an access procedure involving humans. Maybe that theoretical drug cartel can afford $25M but they can't afford $100M. So a user of a pure-crypto system would be safe from the cartel but one with a process for law-enforcement access would not.


Another possibility is to split the keys and include external keyholders, like famous security researchers and lawyers. And give everybody 2 keypairs. One keypair slowly corrupts the database while the other is legit.


Really interesting idea of having two keypairs, including one that corrupts the data. Are there any actual examples of such a system?


No, because it assumes an adversary too dumb to make a backup first. And the penalty for getting caught providing the wrong key is, uh, not good.


>As a counterpoint example, a system that allows for user data access only after a request has been made to access that data, the request is recorded in a request log system of some sort, and approval for the request goes through the appropriate checks (legal and procedurally) at which point it's signed off on and data access can occur.

The problem with this sort of system is that while you then don't have to trust any single individual, you still have to trust the organization. Shady Email Servers, Inc. can promise you all the checks and balances they want, but if the two people who have to be in agreement to betray you are both in the Mafia then it's all smoke and mirrors.

With peer reviewed cryptographic systems you only have to trust the math.


That's why companies like Google and Microsoft should provide as much end-to-end encryption for their users as possible, so the government doesn't even bother asking them for the data.


Well, the government can ask them not to do end-to-end encryption in the first place; and it's likely that MS values the government's desire above yours.


Nonsense. A Court Order is a Legal Requirement. Same as having your company registered somewhere, or having all your invoices sorted out. If you want to do business, you have to respect local laws.


Since fending off legal hackers and inside jobbers is too difficult...

What is the current state of the art on homomorphic encryption? Does it still cost an 'ARM and a leg' of CPU cycles?
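As a small illustration of what (partial) homomorphism means, unpadded textbook RSA happens to be homomorphic under multiplication of ciphertexts; fully homomorphic schemes support arbitrary computation on encrypted data but remain orders of magnitude more expensive in CPU. A toy demo with deliberately insecure parameters:

```python
# Textbook (unpadded) RSA with toy primes -- wildly insecure, demo only.
p, q = 61, 53
n, e = p * q, 17                     # n = 3233, public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+ modular inverse)

m1, m2 = 6, 7
c1, c2 = pow(m1, e, n), pow(m2, e, n)

# Anyone can multiply the two ciphertexts without knowing d ...
c_prod = (c1 * c2) % n

# ... and the result decrypts to the product of the plaintexts.
assert pow(c_prod, d, n) == (m1 * m2) % n   # 6 * 7 = 42
```

Real RSA padding deliberately destroys this property; schemes like Paillier (additive) and lattice-based FHE are designed to preserve and extend it, at the CPU cost the comment alludes to.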



