Use #Signal? Despite the fact that there are *many* good reasons for anyone with important secrets to protect
*not* to do that (US-based, no warrant canary etc), and Moxie has defended aspects of his centralized set-up by saying people shouldn't use it for that?
BTW I was linked to that guide from @nolan 's blog piece on using a #YubiKey, which is well worth the read:
@bob @strypey @nolan But the same rules apply to hardware tokens as to physical keys (to a car, a house). E.g. one needs a spare key and a way to duplicate them / issue new ones in case the old one is lost or stolen. Then comes the question of how to do that securely, and it quickly becomes quite a complex scheme...
@wiktor this is true. But it's a lot simpler than figuring out how to securely use software private keys across multiple devices, including some you don't own and only need to use once or occasionally. With the physical key, it becomes a simple 2-step process:
1) insert key
2) login as normal
@strypey @bob @nolan Agreed that "securely" and "software private keys" aren't really compatible, but even though I've worked on implementations of various standards (OpenPGP Card, PIV, U2F) I wouldn't call any of them a "simple 2-step process"; there are always some issues here and there depending on the OS, certificate, migration of keys etc.
For the record, I haven't used software private keys for years, but the process isn't easy at first.
@wiktor one of the biggest advantages I see is that physical keys are a very familiar #UX. People know how to keep them safe, identify trusted people to leave spares with, and so on, and are used to doing so. Even I find managing #PGP keys so painfully complicated that, even if I knew anyone else who used PGP, I'd probably still consider it safer not to use it and write emails as if they were all being read by the government, ETs, lizards, #MenInBlack with no eyebrows etc
If you mean login through the browser then WebAuthn is probably the best fit there. But that's only for authentication and usually used in browsers (U2F has a Python lib for other uses)... just mentioning, I'm not arguing :)
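For anyone curious, the challenge–response flow behind U2F/WebAuthn can be sketched in a few lines. This is only a toy under stated assumptions: real tokens use asymmetric signatures (the private key never leaves the hardware), and here an HMAC over a shared secret stands in for the token's signing operation; all names are made up for illustration.

```python
import hashlib
import hmac
import os

# Toy challenge-response sketch. Real WebAuthn/U2F uses asymmetric
# signatures; here a shared secret stands in for the token's key,
# just to show the message flow.

def server_make_challenge():
    return os.urandom(32)  # server sends a fresh random challenge

def token_sign(secret, challenge, origin):
    # the token binds its response to the origin, which is what defeats
    # phishing: a response minted for evil.example is useless elsewhere
    return hmac.new(secret, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(secret, challenge, origin, response):
    expected = hmac.new(secret, challenge + origin.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

secret = os.urandom(32)             # established once, at registration
challenge = server_make_challenge()
response = token_sign(secret, challenge, "https://example.com")
assert server_verify(secret, challenge, "https://example.com", response)
assert not server_verify(secret, challenge, "https://evil.example", response)
```

The origin binding in `token_sign` is the key design point: unlike a password, the response can't be replayed against a different site.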
@wiktor fair point. I guess the argument I'm making is that if we want to make it possible for Jo User to encrypt their email (for example), or use encrypted chat without trusting a third party like #OWS/ #Signal or #Wire, we need to create authentication / key management systems for that which are as easy to use as house keys or cars keys (or as close to that as possible).
@strypey Well, Signal wouldn't agree with you (I'm just playing devil's advocate for the sake of the argument); they use a key-per-device model. Besides, how would you communicate with a physical token inserted in your phone? That's a UX nightmare.
My biggest pain point in using GPG via e-mail is basic stuff like the Enigmail setup being broken. That's why I can't use it easily with my accountant or friends. What do super-strong keys mean if the initial setup is borked?
@wiktor I presume Signal (like Wire) do this because currently there's no easy way for users to identify themselves securely across multiple devices. A physical key could solve that. I don't know about the specifics of using USB devices with mobile devices, but I have seen people use a dongle to plug a normal USB storage drive into one. That didn't seem too complicated. Ideally the key would have multiple physical interface options (or you could have a set of keys, one for each plug style).
@strypey They use multiple keys because that way you can stop trusting one device in case you don't use it anymore (bought a new phone) or lost it.
But user identification is actually something that can be added to this scheme (e.g. PGP uses one primary key for key identification and multiple subkeys for practical purposes).
Yes, USB-C works on phones (I used a Yubikey 4C that way), and there is the Yubikey 5, which has an NFC interface (and USB-A).
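The primary-key-plus-subkeys model can be sketched roughly like this. Again a toy, not real OpenPGP: HMAC stands in for asymmetric certification signatures and every name is invented, but it shows why revoking a lost phone's key doesn't mean losing your identity.

```python
import hashlib
import hmac
import os

# Toy sketch of the PGP-style "one primary key, many subkeys" model.
# The primary key only certifies per-device subkeys; a device's subkey
# can be dropped without changing the identity friends verified.

primary = os.urandom(32)  # identity: this is what friends verify once

def fingerprint(key):
    return hashlib.sha256(key).hexdigest()[:16]

def certify(primary_key, subkey):
    # primary key "signs" the subkey's fingerprint (HMAC as stand-in)
    return hmac.new(primary_key, fingerprint(subkey).encode(), hashlib.sha256).digest()

def check_cert(primary_key, subkey, cert):
    return hmac.compare_digest(certify(primary_key, subkey), cert)

phone, laptop = os.urandom(32), os.urandom(32)   # one subkey per device
certs = {fingerprint(k): certify(primary, k) for k in (phone, laptop)}

# lost the phone? drop its certification; the primary key is unchanged
del certs[fingerprint(phone)]
assert check_cert(primary, laptop, certs[fingerprint(laptop)])
```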
@wiktor right, so either you load the software keys for all of your services and devices onto one physical key (and have spares), or you have a physical key for each device, with the appropriate plug style, carrying the software keys you need on that device. Maybe users could be allowed to choose one or the other, based on what makes sense for their use case. People with many devices, or who replace them frequently, might find one key (set) easier; others might find per-device keys simpler.
@wiktor the trick is to automate everything you possibly can, and present the user with the simplest possible GUI for the bits that do need human intervention. If the #UX of security software requires hacker-level knowledge and skills, using it can only ever be an elite privilege. But we made office suites and email simple enough for mere mortals to use; I'm confident we can do the same for #DigitalEnvelopes / #DigitalCarKeys (or whatever metaphor you prefer ;)
@strypey But even if you have these keys, how do you tie a person to a key? For example, if you and I had these physical keys, how do we exchange info that your key is yours and my key is, well... mine?
That's the real problem, PGP tried Web of Trust (delegated verification), Conversations.im has scanning QR codes (https://gultsch.de/trust.html)...
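What QR scanning automates is essentially out-of-band fingerprint comparison: both sides derive a short digest of the public key and compare it over a channel the attacker doesn't control. A rough sketch, with hypothetical key material and a digest shortened for readability:

```python
import hashlib

# Toy sketch of out-of-band key verification (the step that scanning a
# QR code in Conversations.im automates). Key bytes are placeholders.

def fingerprint(pubkey_bytes):
    digest = hashlib.sha256(pubkey_bytes).hexdigest()
    # group the first 16 hex chars into blocks so humans can compare
    # them aloud or on screen without losing their place
    return " ".join(digest[i:i + 4] for i in range(0, 16, 4))

alice_key = b"alice-public-key-bytes"       # hypothetical key material
shown_on_alices_phone = fingerprint(alice_key)
scanned_by_bob = fingerprint(alice_key)     # what the QR code encodes
assert shown_on_alices_phone == scanned_by_bob
```

The hard part isn't the hashing, it's the trusted channel: the comparison only binds person to key if Bob got the fingerprint from Alice in person (or via something he already trusts).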
@strypey It's the same with XMPP, I'm quite familiar with the protocol but until Conversations.im showed up I wouldn't recommend XMPP with a straight face to my family & friends.
Software needs to be at least friendly to set up and use; if it's not, no-one will use it, and then it doesn't matter whether it's super-secure or not.
@wiktor absolutely agree :)
@strypey For the record I like your idea and I thought about giving physical keys to close acquaintances months ago but then these annoying UX issues broke my faith in the entire experiment :)
> I do not trust [Signal], and I will not exchange sensitive data with it.
So why use it at all? If you're only using it for non-sensitive data, you don't really need an encrypted chat app. Is it because you have contacts you want to keep in touch with who refuse to use anything else (#NetworkEffect)?
@gentoorebel I managed to get my family (all hooked on FB Messenger) to talk to me on #Wire. Mainly by absolutely refusing to chat with them via accounts on FB or other #DataFarms. Wire has all the benefits of Signal (#FreeCode, #E2EE, user-friendly etc), but is developed by a team employed by a private company (Swiss), not controlled by one cypherpunk celebrity. Plus, server>server federation is on Wire's roadmap (Moxie has said Signal will *never* do this)
@strypey For example, I remember OWS doing domain fronting to hide Signal services behind Amazon or Google infrastructure, both to slip under the radar of local censorship and control. How do you enable end users with little technical skill and at great risk (journalists, ...) to have manageable, reasonable safety? How does federation and decentralization help here? Or even fdroid? Seems like just more complexity you need to trust.
> How does federation and decentralization help here?
if Signal was federated, users could set up their own server, under their own control, and still communicate with users on servers they can't access directly. As long as the server>server federation uses standard protocols that censors can't afford to block, that is. Then Moxie wouldn't need to risk compromising other people's domains (AWS have threatened to boot Signal for domain fronting).
@strypey But individual target users for apps such as Signal don't have the skills to do so, in worst case they won't even be able to recognize a federated server that has been prepared for the sole purpose of spying on them. Or security issues caused, for example, by protocol flaws. How do you get *all* servers to update soon enough? Again, in this case, I don't generally argue against it but ...
> But individual target users for apps such as Signal don't have the skills to do so,
So what, they just rely on Signal's domain fronting hacks?
> in worst case they won't even be able to recognize a federated server that has been prepared for the sole purpose of spying on them.
Hmm. You mean they won't be able to recognize if Signal has been set up for the sole purpose of spying on them?
@strypey Yes, exactly the latter is what I mean. Given enough money and skills, this should be easily doable. Plus, worse: Things are "easy" if you're an anonymous user, one of a few millions on some provider who doesn't know all of them. If you're on an instance hosted by people you (think you) know, this all of a sudden gets closer. In the best case, again it's just about trust. In worst case, both you and your operator are at risk.
@z428 you missed my point. If they don't have any way to know whether a self-hosted Signal server is set up to spy on them, how are they supposed to assess whether or not OWS set up Signal to spy on them? Are they supposed to just trust Moxie? Or read widely about Signal's security practice, and make the effort to understand what makes a service more or less secure? In which case they could apply that knowledge to a self-hosted server.
@z428 basically, people with sensitive secrets to communicate shouldn't be trying to do that with networked technologies unless:
a) they have the info and skills to competently assess how secure a networked technology is (either a hosted service or something they self-host)
b) they have access to someone they are sure they can trust who does
Otherwise they *will* get pwned. This is even more important if they are organizing against governments that imprison and kill dissidents.
@strypey Well... So we know plenty of ways these people shouldn't be trying, which however leaves unanswered: given this target group, which, *right now*, is the least dangerous technological choice, assuming they're operating in the real world, know a bunch of contacts, and need some means of communication that is reasonably tamper-proof, given there's no 100% security? What would we recommend? Right now, I see a lot of this totally lost in different people providing (mostly ...
@strypey ... valid) arguments against each others technologies, but in the end nothing practicable is essentially left. The EFF writeup on that issue ends up just in a similar way, btw: https://www.eff.org/deeplinks/2018/03/why-we-cant-give-you-recommendation
> but in the end nothing practicable is essentially left.
Right. So if communicating secrets across the net is not safe with any known combination of technologies, the only sane security advice to give is "DON'T DO IT!?!". Especially, as I say, when people's lives or freedom are on the line. Yet I regularly see people (including Moxie) recommending Signal for activists, journalists, dissidents, and so on, any of whom could be in that situation. This is highly irresponsible!
@z428 maybe we need to make an effort to have more nuanced conversations about this? Where we specify at the outset whether we're talking about defending the average person's privacy against passive mass surveillance, or defending dissidents against active interception attempts, or something else. Different #ThreatModels require different approaches. As the #EFF quite rightly conclude, there's no silver bullet here.
@strypey By the way: I definitely think blocking, for example, XMPP for all but a few "controlled" / audited systems would be an easy task compared to blocking Google or Amazon. Generally, I think self-hosting of any kind of service is something each jurisdiction could easily get out of the way with reasonably tight legal approaches.
@strypey I still think this comment by Moxie Marlinspike nails it, he has a few very valid points throughout the whole thread: https://github.com/LibreSignal/LibreSignal/issues/37#issuecomment-217661076
@z428 there's a lot to unpack in that comment. The dismissal of anyone who thinks #SoftwareFreedom is a necessary precondition for secure software as "cryptonerds and moralists" is notable. Once you strip out all the hyperbole and sarcasm, most of the factual claims are debunked in the subsequent comments, starting with:
@z428 I also note that this comment includes Moxie claiming that Signal is safe for ...
> all the dissidents, activists, NGOs, and journalists that I've met
This is clearly *not* the case, for reasons Drew describes in his piece, and Moxie himself says elsewhere that such people should *not* expect Signal to keep their comms safe (can't find the quote right now but I'll dig it up if you can't find it for yourself)
@strypey I don't generally disagree, but I'm a bit concerned about trust, in this context, mistaking a "people thing" for something technical. There are already plenty of players you just have to trust when using digital means of communication. I'm concerned that, with our current concepts of freedom and openness, this adds a lot more parties to the game which, in the end, most users just "have to trust".
@z428 there are at least three things to consider 1) is it possible to audit the security, 2) has the security been audited, 3) did the auditors do a thorough job? In order to meet the preconditions for 1), you need a) access to the source code, and b) a way to ensure that the source code you're given was actually used to compile the binaries/ installed on the server. Signal now meets a) but goes to great lengths to avoid b), which is ... fishy.
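Condition b) is what reproducible builds are for: anyone can rebuild from the published source and check for a byte-identical binary. A minimal sketch of the comparison step, with simulated files standing in for real build artifacts:

```python
import hashlib
import os
import tempfile

# Toy sketch of verifying a reproducible build: hash the shipped binary
# and a binary rebuilt from the published source, then compare. The
# file names and contents below are simulated stand-ins.

def sha256_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as d:
    official = os.path.join(d, "official.apk")   # what users download
    rebuilt = os.path.join(d, "rebuilt.apk")     # built from source
    for p in (official, rebuilt):
        with open(p, "wb") as f:
            f.write(b"identical build output")   # simulate a clean match
    match = sha256_file(official) == sha256_file(rebuilt)

assert match  # byte-identical: the published source produced this binary
```

If the hashes differ, either the build isn't reproducible or the shipped binary wasn't built from the source you audited, which is exactly the gap between a) and b).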
@z428 secondly, a federation of servers can be imagined as a single server made up of many parts, each of which has to communicate with every other part for the system to work as advertised. Instead of the server being a black box (like Signal), where you just have to trust what happens between client>server, in a federation you can check exactly what's being passed between servers, and how secure it is. Lots of people can check, and check each others' work.
@z428 the freedom and openness we're after is fundamentally the ability to create systems of transparency and accountability at all (eg code audits, protocol testing, cryptographic proofs, reproducible builds etc). We can argue about what exactly those systems should be, but I don't accept the argument that "trust Moxie" is a sufficient substitute for any such system.