@kate Looks like they're desperate for funds. It is alarming, and we should support ff somehow.
If ff fails, the web will remain Chromium-only (except maybe the caves of Gemini). That would leave Google alone to set all the rules. A doomsday scenario of sorts. So don't be too picky, and keep supporting ff. If their funding fails, we'll all get ads in the Chromium address bar, or something worse.
I wish Mozilla the best of luck in holding this back as long as possible, but unfortunately they find themselves having to make these compromises. I don't think they can do what it takes to save the web without losing what influence they still have...
> It is weird how there is no political will to appreciate and fund it. At least in the countries I see. Is it because there is no demand from voters? Maybe there is demand, but it's not yet articulated?
I tried to do my bit by emailing the tech spokespeople in NZ political parties a link to Nadia Eghbal's 'Roads and Bridges' report, with a bit of contextualizing comment about why it's important they read it.
So far just the idea has been floated, but apparently the need for it will become real. I would be very interested in this, also as an antidote to those who claim that open source is automagically a commons, because most open source software has not been created by commoners. (see https://en.wikipedia.org/wiki/Elinor_Ostrom )
Slightly OT... I saw that Drew DeVault started working on visurf, based on NetSurf, and intends to create an HTML + CSS framework specifically targeting smaller browsers as 1st-class citizens.
The key point here is “if you can verify the source”. This is in practice impossible, and JS is executed as the page loads. We can’t expect people to inspect the source code of every page before rendering.
I don’t see why JS is needed to implement small sites. I only use it for https://warmedal.se/~wobbly/ and even then only for a nicer UX. It could just as well have been an ordinary web form.
@tinyrabbit @humanetech @gert @strypey @dudenas @alcinnz @kate @onepict It is not impossible, it’s just not possible within the confines of current browsers. Entirely possible via an extension or third-party app, etc.
We need it for Small Web (https://small-tech.org/research-and-development) because there’s no other way for you to own your own keys or ensure that your content is end-to-end encrypted.
Hm, it may make sense as a longshot. But almost nobody can perform a security audit on their own. That means you need to trust someone's agency. As I think of it, a perfect model would be one where I could choose an agent I trust to verify content for me.
On the client side, most antivirus software probably claims to audit web content... But I admit, I usually consider them more annoying than most viruses.
@dudenas @tinyrabbit @humanetech @gert @strypey @alcinnz @kate @onepict Well, there’s verification and there’s verification. I’m not talking about a source code audit, but at least verifying that the signature of the file matches what the organisation you trust says it should be. Beyond that, yes, a bigger issue is having trusted agents that actually perform things like source code audits.
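For scripts fetched from a CDN, browsers already ship a primitive in this spirit: Subresource Integrity (SRI), where the page embeds a hash of the exact file it expects and the browser refuses to execute anything that doesn't match. A minimal sketch of computing such an integrity value in Python (the example script bytes are made up, not from any real library):

```python
import base64
import hashlib

def sri_hash(data: bytes, algo: str = "sha384") -> str:
    """Compute a Subresource Integrity value ("sha384-<base64 digest>"),
    the format used in <script integrity="..."> attributes."""
    digest = hashlib.new(algo, data).digest()
    return f"{algo}-{base64.b64encode(digest).decode('ascii')}"

# Pin the exact bytes you reviewed; a browser enforcing SRI will refuse
# to run any file whose hash differs (e.g. after CDN tampering).
script = b"console.log('hello');"
print(sri_hash(script))
```

This only proves the file is byte-for-byte what the page author pinned; it says nothing about whether that code is benign, which is the separate audit problem mentioned above.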
So which companies should I trust? How do I decide? What about vanilla JS in <script> tags? Small unknown libs?
I don’t trust React, Angular, or a dozen other equally bloated libs, no matter which CDN offers them. I don’t think we can build a trust system that can provide any security or trust in a meaningful definition of those words.
I really don’t see how the client has any meaningful control over a client side script other than deciding whether it should be executed or not.
I'm sorry, but this is pretty naive.
As I said, JS is executed in the browser before I've decided whether I trust it or not. Let's say I have a plugin where I allowlist sources. How do you suggest I keep that updated with all the hundreds of frameworks that pop up every day? And how do I deal with vanilla JS? Trust or not?
It makes more sense to block JS and only enable it on *sites* I trust. Not JS sources I trust.
We need JS code signatures and signature verification (I believe that's what @aral was talking about).
But any JS library will allow devs to do whatever JS is capable of doing, which is a lot. There's no guarantee that evilcorp.com uses the latest version of React in an ethical and -- to me -- secure way even if the source and signature check out.
Sorry, we may be talking about different things. Let's say I trust goodsite.org to run JS, and they import Angular. Then of course I want to be certain that the file imported is the actual Angular source that they intend to run, and not something malicious inserted in a supply chain attack.
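That supply-chain check can be made concrete outside the browser too (a sketch, not anyone's actual tooling): hash the fetched file and compare it against a digest the vendor published out of band, refusing to execute on mismatch. The `verify_pinned` helper and the pinned value here are hypothetical:

```python
import hashlib
import hmac

def verify_pinned(data: bytes, pinned_sha256_hex: str) -> bool:
    """Accept a fetched script only if its SHA-256 matches the digest
    the publisher announced out of band (e.g. on their release page)."""
    actual = hashlib.sha256(data).hexdigest()
    # Constant-time comparison; timing matters little here, but it's free.
    return hmac.compare_digest(actual, pinned_sha256_hex)

fetched = b"/* bytes fetched from the CDN */"
pin = hashlib.sha256(fetched).hexdigest()  # in reality: from the vendor
assert verify_pinned(fetched, pin)
assert not verify_pinned(b"/* tampered payload */", pin)
```

Again, this catches a swapped file, not a malicious but correctly-signed release.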
I just realised that's probably what you mean, in which case we're in full agreement 😆
@Iutech @tinyrabbit @aral @dudenas @humanetech @gert @strypey @alcinnz @kate @onepict I think the ship has sailed with respect to trusted code. The only solution is to not trust any code, and just isolate everything. It's the kind of pragmatic approach taken by Qubes OS.
Even if there was a practical way to only run trusted code, that code could still have bugs which leads to security issues. Letting the code run in isolated containers neatly deals with this issue, at the expense of making intra-container communication more cumbersome (as any Qubes user will know)
From my perspective, this need exists. JS is of course a terrible way to deliver software, but regardless of technology, we still need to be able to run code from remote sources.
So yes, I agree that JS on web pages is bad, and we need a better way to deliver content. But the issue isn't security. Getting rid of JS would have a benefit for privacy though.
With regards to downloading code from the Internet, as ActiveX showed us in the 90's already, relying on trusted code simply does not work. We have to assume that anything you download is hostile, and isolation is the only solution that I know of that actually works and can be used today.
@alcinnz @Iutech @tinyrabbit @aral @dudenas @humanetech @gert @strypey @kate @onepict Nothing wrong with package managers. My issue with them is that while they provide structure to the deployment of software (especially things like NixOS, even though I'm not a fan of its actual implementation), very few of them even attempt to provide some form of isolation.
The only system that tries is Flatpak, and it's nowhere near perfect.
@humanetech Relatedly, I'm actively preparing to start implementing my own visual web browser (first targeting TVs & eReaders). I've already got auditory working very nicely!
I like targeting unusual human interface devices; it provides interesting constraints to keep me disciplined...
Super great to hear! I greatly admire your efforts.
Besides constraining you to an appropriate scope, I think these are wonderful places to start, as on these more locked-in devices our agency is taken away in ways that are harder to overcome.
I pray every day to 6,000 gods that my dumb TV does not break down, and I have to go in search of another 2nd-hand model that is not an ad-infested surveillance capitalism nightmare.
Eghbal's report uses physical infrastructure as a metaphor for digital infrastructure, as a way to work around the limited tech knowledge of political and financial decision-makers. I haven't read the whole thing yet but I read enough to think the strategy is worth a try.