
'On the General Architecture of the Peer Web' is an excellent essay by Aral Balkan of Ind.ie, laying out a brief history of the social impacts of computers and networks with a minimum of geek jargon, and some ideas for what the future of digital network technology could hold for us:
ar.al/2019/02/13/on-the-genera

Although it doesn't affect the main thrust of his argument, I don't agree with Aral that the web was always centralized. There are still, here and there, remnants of the pre-Geocities tradition of personal homepages hosted on the person's own desktop computer in their home or office. The changes to the ISP business that made this increasingly impractical - and set the stage for the hosting business - are documented by John Walker in 'The Digital Imprimatur':
fourmilab.ch/documents/digital

@strypey I'd say that the web was decentralized until the web 2.0 period, which began around 2004 (also the beginning of Ubuntu). After that it became increasingly centralized, with The Cloud and virtualization enabling an effective return to mainframe computing.

In the period 1994-2004 people used many different email servers. There were personal websites and blogs, typically hosted by the many small ISPs which started up. Although Microsoft and companies like Compuserve tried to apply the mainframe model in the mid 1990s, they failed because the technology and infrastructure weren't capable enough, and they also over-curated their content, making it bland compared to what existed on the "open web".

@bob @lightweight according to John Walker, the centralization of the web started around 1995, with the failure to adopt IPv6. Although the term "web 2.0" wasn't coined until later, sites that fit that description started to appear in the late 90s (including the first site in 1999).

@bob @lightweight
"With no possibility of migrating to IPv6 in time to solve the address space crunch, the industry resigned itself to soldiering on with IPv4, adopting the following increasingly clever means of conserving the limited address space. Each of these, however, had the unintended consequence of transforming the pure peer relationship originally envisioned for the Internet into “publisher” and “consumer” classes, and increasing the anonymity of Internet access."
- John Walker

@strypey we now have commoditised OpenStack hosting... that similarly allows people to decentralise without having to rewrite their deployment tools....

@lightweight could I run OpenStack on a consumer-grade desktop computer in my home / office (in my case, the same place)? If I did, could people connect to it over a consumer-grade internet connection? If not, the decentralization enabled by OpenStack is an improvement on the corporate cloud, but still centralized around commercial hosting, compared to the distributed early 90s web 1.0 architecture John Walker describes.

@lightweight can OpenStack work with dynamic DNS? If so, you might not even need a static IP. But you would need an ISP that provides sufficient upload bandwidth for a server to work effectively, and one that doesn't ban the use of servers on its consumer-grade connections and require a much more expensive server-grade connection for that. You're in a better position to know than me these days, but I'm not aware of any ISPs in Aotearoa offering that. This is a big part of what needs to change.

@strypey Pretty much all fibre connections offer a static IP (possibly at a small additional monthly cost - I pay an extra $5/month I think with Vodafone). That's also true of cable modem services (like Vodafone's misleadingly named FibreX plans)... Don't think most ADSL plans have static IPs. Not sure if DynamicDNS is acceptable for OpenStack... I'd expect it would work but probably isn't overly robust.

@lightweight @strypey

Huh, I can get a static IP with 2degrees for $10/mo, that's not bad. Using the Linode API currently, but a fixed IP would be nice ...
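
(As a rough sketch of what the "Linode API" approach above might look like: a small script that re-points an A record at the home connection's current public address. The domain/record IDs and token are placeholders, and the endpoint shape is assumed from Linode's public v4 API docs rather than stated anywhere in this thread.)

```python
# Hypothetical dynamic-DNS-style updater using the Linode v4 API.
# DOMAIN_ID / RECORD_ID are placeholders; the token comes from the environment.
import os
import requests

TOKEN = os.environ["LINODE_TOKEN"]   # personal access token (placeholder)
DOMAIN_ID = 12345                    # hypothetical domain ID
RECORD_ID = 67890                    # hypothetical A-record ID

# Ask an external "what is my IP" service for the connection's current public IPv4 address.
current_ip = requests.get("https://api.ipify.org").text.strip()

# Point the A record at that address.
resp = requests.put(
    f"https://api.linode.com/v4/domains/{DOMAIN_ID}/records/{RECORD_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"target": current_ip},
)
resp.raise_for_status()
print(f"A record now points at {current_ip}")
```

Run from cron every few minutes, something like this behaves as a home-grown dynamic DNS, which is presumably what "using the Linode API" is standing in for here.
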

@xurizaemon @lightweight are those IPv4 or IPv6 addresses? (or is that a silly question)

@strypey @xurizaemon Hard to say. Not sure how many ISPs assign IPv6 IPs by default these days...
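
(One quick way to check from any given connection, assuming ipify's IPv6-only endpoint: if the request below succeeds, the ISP is handing out routable IPv6; if it fails or times out, it almost certainly isn't.)

```python
# Check whether this connection has working IPv6 by asking an
# IPv6-only "what is my IP" service (assumed: api6.ipify.org).
import requests

try:
    addr = requests.get("https://api6.ipify.org", timeout=5).text.strip()
    print(f"Public IPv6 address: {addr}")
except requests.RequestException:
    print("No IPv6 connectivity (or the check service was unreachable)")
```
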

@strypey @lightweight

I feel like ISPs generally don't try to block consumers running services. Dynamic DNS can be done. ToS may not *support* use of servers, but realistically that's just so they don't have to *support* it; NZ ISPs have all IMO moved towards reducing what they need to support as it's the highest cost of service. (Power companies provide power, but if they had to support appliances, they'd be charging much more.)

@xurizaemon @lightweight I haven't looked into it for a while, but when I was chief tech and bottlewasher at Oblong in Welly, trying to set up in-house hosting for activist services was one of my more ambitious goals. From what I remember, the net connection we used explicitly banned us from running servers on it (plus our bandwidth was capped), and we never achieved it.

@strypey @xurizaemon depends on your ISP and plan - I've been using my static IP for light hosting for a couple decades...

@xurizaemon @lightweight from memory, every time I've done a speed test on a standard home-or-garden net connection, the upload bandwidth has been a tiny fraction of the download bandwidth. Am I wrong in thinking these would need to be at (or close to) parity to run a server used by more than a tiny handful of people at a time?
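
(A back-of-envelope answer to the parity question, using illustrative numbers rather than any particular NZ plan: even a heavily asymmetric uplink can serve a modest personal site to far more than a handful of readers.)

```python
# Rough capacity of an asymmetric home uplink for a small personal site.
# The plan speed and page size are assumptions for illustration only.
upload_mbps = 20          # e.g. the uplink of a 100/20 fibre plan
page_size_mb = 1.0        # a fairly heavy personal page, in megabytes

upload_mb_per_s = upload_mbps / 8                  # megabits -> megabytes per second
seconds_per_page = page_size_mb / upload_mb_per_s  # time to push one page out
pages_per_minute = 60 / seconds_per_page

print(f"~{seconds_per_page:.1f} s per page, ~{pages_per_minute:.0f} page views/minute")
# -> roughly 0.4 s per page and ~150 page views a minute: well beyond a
#    "tiny handful" of readers, so upload/download parity isn't needed
#    unless you're streaming media to many people at once.
```
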

@strypey As far as I know, you can happily use IPv6 for OpenStack implementations... thus allowing peer-to-peer without NAT obfuscation...

@strypey
In the early 1990s I had an ISP that encouraged people to run their own websites, and I did so. Static pages were all I did, and if I remember correctly, that was all that I could have done.
