  • The thing to keep in mind is that there exist things which have “circumstantial value”, meaning that the usefulness of something depends on the beholder’s circumstances at some point in time. Such an object can actually have multiple valuations, as compared to goods (which have a single, calculable market value) or sentimental objects (“priceless” to their owner).

    To use an easy example, consider a sportsball ticket. Presented at the ballfield, it is redeemable for a seat to watch the game at the time and place written on the ticket. And it can be transferred – despite Ticketmaster’s best efforts – so someone else could enjoy the same. But if the ticket is unused and the game is over, then the ticket is now worthless. Or if the ticket holder doesn’t enjoy watching sportsball, their valuation of the ticket is near nil.

    So to start, the coupon book is arguably “worth” $30, $0, or somewhere in between. Not everyone will use every coupon in the book. But if using just one coupon will result in a savings of at least $1, then perhaps the holder would see net-value from that deal.

    That said, I’m of the opinion that if a donation directly results in me receiving something in return… that’s not a donation. It’s a sale or transaction dressed in the clothes of charity. Plus, KFC sends coupons in the mail for free anyway.


  • Notwithstanding the possible typo in the title, I think the question is why USA employers would prefer to offer a pension over a 401k, or vice-versa.

    For reference, a pension is also known as a defined benefit plan, since an individual who has paid into the plan for the minimum amount will be entitled to some known amount of benefit, usually in the form of a fixed stipend for the remainder of their life, and sometimes also health insurance coverage. USA’s Social Security system is also sometimes called the public pension, because it does in-fact pay a stipend in old age and requires a certain amount of payments into the fund during one’s working years.

    Whereas a 401k is uncreatively named after the tax code section which authorized its existence, initially being a deferred compensation mechanism – aka a way to spread one’s income over more time, to reduce the personal taxes owed in a given year – and then grew into the tax-advantaged defined contribution plan that it is today. That is, it is a vessel for saving money, encouraged by tax advantages and by employer contributions, if any.

    The superficial view is that 401k plans overtook pensions because companies wouldn’t have to contribute much (or anything at all), shifting retirement costs entirely onto workers. But this is ahistorical, since initial 401k plans offered extremely generous employer contribution rates, some approaching 15% matching. Of course, the reasoning then was that the tax savings for the company would exceed that, and so it was a way to increase compensation for top talent. The 80s and 90s were when the 401k was only just taking hold as a fringe benefit, so you had to have a fairly cushy job to have access to a 401k plan.

    Another popular viewpoint is that workers prefer 401k plans because they’re more easily inspectable than a massive pension fund, and history has shown how pension funds can be mismanaged into non-existence. This is somewhat true, if US States’ teacher pension funds are any indication, although the Ontario Teachers’ Pension Plan would be the counterpoint. Also, the 401k plan participants at Enron would have something to complain about, as most of the workers’ funds were invested in the company itself, delivering a double whammy: no job, and no retirement fund.

    It is my opinion that the explosion of 401k plans and participants in such plans – to the point that some US states are enacting automatic 401k plans for workers whose employers don’t offer one – is due to 1) momentum, since more and more employers keep offering them, 2) but more importantly, because brokers and exchanges love managing them.

    This is the crux: only employers can legally operate a 401k plan for their employees to participate in. But unless the employer is already a stock trading platform, they are usually ill-equipped to set up an integrated platform that allows workers to choose from a menu of investments which meet the guidelines from the US DOL, plus all other manner of regulatory requirements. Instead, even the largest employers will partner with a financial services company that has expertise in offering 401k plans, such as Vanguard, Fidelity, or Merrill Edge.

    Naturally, they’ll take a cut on every trade or somehow get compensated, but because of the volume of 401k investments – most people auto-invest every paycheck – even small percentages add up quickly. And so, just like the explosion of retail investment where ordinary people could try their hand at day-trading, it’s no surprise that brokerages would want to extend their hand to the high volume business of operating 401k plans.
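
    As a rough illustration of how those small cuts compound across decades of paycheck contributions, here is a quick sketch. Every figure in it (contribution size, return, fee levels) is an assumption made up for the example, not data about any real plan:

    ```python
    # Sketch: effect of a small annual fee on a 401k funded every paycheck.
    # All numbers are illustrative assumptions, not real plan figures.

    def final_balance(per_paycheck, paychecks_per_year, years, annual_return, annual_fee):
        """Contributions grow at (return - fee), compounded once per year for simplicity."""
        balance = 0.0
        growth = 1 + annual_return - annual_fee
        for _ in range(years):
            balance = (balance + per_paycheck * paychecks_per_year) * growth
        return balance

    low_fee = final_balance(500, 26, 30, 0.07, 0.0005)  # 0.05% annual fee
    high_fee = final_balance(500, 26, 30, 0.07, 0.01)   # 1% annual fee
    print(f"0.05% fee: ${low_fee:,.0f}")
    print(f"1.00% fee: ${high_fee:,.0f}")
    print(f"lost to fees: ${low_fee - high_fee:,.0f}")
    ```

    Even a 1% annual fee versus 0.05% ends up costing a meaningful slice of the final balance, which is why the sheer volume of 401k money is so attractive to the firms that administer it.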

    Whereas, how would they make money off a pension fund? Pension funds are multi-billion dollar funds, so they can afford their own brokers to directly buy a whole company in one shot, with no repeat business.


  • Although copyright and patents (and trademarks) are lumped together as “intellectual property”, there’s almost nothing which is broadly applicable to them all, and they might as well be considered separately. The only things I can think of – and I’m not a lawyer of any kind – are that: 1) IP protection is mentioned broadly in the US Constitution, and 2) they all behave as property, in that they can be traded/reassigned. That’s it.

    With that out of the way, it’s important to keep in mind that patent rights are probably the strongest in the family of IP, since there’s no equivalent “fair use” (US) or “fair dealing” (UK) allowance that copyright has. A patent is almost like owning an idea, whereas copyright is akin to owning a certain rendition plus a derivative right.

    Disney has leaned on copyright to carve for themselves an exclusive market of Disney characters, while also occasionally renewing their older characters (aka derivatives), so that’s why they lobby for longer copyright terms.

    Whereas there isn’t really a singular behemoth company whose bread-and-butter business is to churn out patents. Inventing stuff is hard, and so the lack of such a major player means a lack of lobbying to extend patent terms.

    To be clear, there are companies who rely almost entirely on patent law for their existence, just like Disney relies on copyright law. But type foundries (companies that make fonts) are just plainly different than Disney. Typefaces (aka fonts) as a design can be granted patents, and then the font files can be granted copyright. But this is a special case, I think.

    The point is: no one’s really clamoring for longer patents, and most people would regard a longer term on “ideas” to be very problematic. Especially if it meant pharmaceutical companies could engage in even more price-gouging, for example.


  • If you hold a patent, then you have an exclusive right to that invention for a fixed period, which would be 20 years from the filing date in the USA. That would mean Ford could not claim the same or a derivative invention, at least not for the parts which overlap with your patent. So yes, you could sit on your patent and do nothing until it expires, with some caveats.

    But as a practical matter, the necessary background research, the application itself, and the defense of a patent just to sit on it would be very expensive, with no apparent revenue stream to pay for it. I haven’t looked up what sort of patent Ford obtained (or maybe they’ve merely started the application) but patents are very long and technical, requiring whole teams of lawyers to draft properly.

    For their patent to be valid, it must not overlap with an existing claim, and it must be novel and non-obvious, among other requirements. They would only do this to: 1) protect themselves from competition in the future, 2) monetize the patent by directly implementing it, licensing it out to others, or becoming a patent troll and extracting nuisance-value settlements, or 3) because they’re already so deep in the Intellectual Property land-grab that they must continue to participate by obtaining outlandish patents. The latter is a form of “publish or perish” and allows them to appear like they’re on the cutting edge of innovation.

    A patent can become invalidated if it is not sufficiently defended. This means that if no one even attempts to infringe, then your patent would be fine. But if someone does, then you must file suit or negotiate a license with them, or else they can challenge the validity of your patent. If they win, you’ll lose your exclusive rights and they can implement the invention after all. This is not cheap.


  • I’ll address your question in two parts: 1) is it redundant to store both the IP subnet and its subnet mask, and 2) why doesn’t the router store only the bits necessary to make the routing decision.

    Prior to the introduction of CIDR – which came with the “slash” notation, like /8 for the 10.0.0.0 RFC1918 private IPv4 subnet range – subnet masks could genuinely be any bit arrangement imaginable. The most sensible would be to have contiguous, MSBit-justified subnet masks, such as 255.0.0.0. But the standard did not preclude using something unconventional like 255.0.0.1.

    For those confused about what a 255.0.0.1 subnet mask would do – and to be clear, a lot of software might prove unable to handle this – it describes a subnet with 2^23 addresses, where the LSBit must also match the IP subnet. So if your IP subnet was 10.0.0.0, then only even-numbered addresses are part of that subnet. And if the IP subnet is 10.0.0.1, then it only covers odd-numbered addresses.

    Yes, that means two machines with addresses 10.69.3.3 and 10.69.3.4 aren’t on the same subnet. This would not be allowed when using CIDR, as contiguous set bits are required with CIDR.
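
    If code helps, here is a tiny sketch of the classic membership test, using the addresses from the example above, to show why both the subnet and the full mask have to be stored when the mask bits aren’t contiguous:

    ```python
    import ipaddress

    def in_subnet(addr: str, subnet: str, mask: str) -> bool:
        """Pre-CIDR membership test: (addr AND mask) == (subnet AND mask)."""
        a = int(ipaddress.IPv4Address(addr))
        s = int(ipaddress.IPv4Address(subnet))
        m = int(ipaddress.IPv4Address(mask))
        return (a & m) == (s & m)

    # Non-contiguous mask 255.0.0.1: the last bit has to match too.
    print(in_subnet("10.69.3.4", "10.0.0.0", "255.0.0.1"))  # True  (even address)
    print(in_subnet("10.69.3.3", "10.0.0.0", "255.0.0.1"))  # False (odd address)
    ```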

    So in answer to the first question, CIDR imposed a stricter (and sensible) limit on valid IP subnet/mask combinations, so if CIDR cannot be assumed, then it would be required to store both the IP subnet and the subnet mask, since mask bits might not be contiguous.

    For all modern hardware in the last 15-20 years, CIDR subnets are basically assumed. So this is really a non-issue.

    For the second question, the router does in-fact store only the necessary bits to match the routing table entry, at least for hardware appliances. Routers use what’s known as a TCAM (ternary content-addressable memory) for routing tables, where the bitwise AND operation can be performed, but with a twist.

    Suppose we’re storing a route for 10.0.42.0/24. The subnet size indicates that the first 24 bits must match a prospective destination IP address. And the remaining 8 bits don’t matter. TCAMs can store 1’s and 0’s, but also X’s (aka “don’t cares”) which means those bits don’t have to match. So in this case, the TCAM entry will mirror the route’s first 24 bits, then populate the rest with X’s. And this will precisely match the intended route.
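
    A quick software sketch of that ternary matching might look like the following. A real TCAM does this in hardware, across every entry in parallel; the value/care pair below is just a toy stand-in for the stored bits and the X’s:

    ```python
    # Toy model of one TCAM route entry: stored value bits plus a "care" mask,
    # where a 0 care bit is an X (don't care). Purely illustrative.
    from ipaddress import IPv4Address

    def tcam_entry(route: str, prefix_len: int):
        care = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
        value = int(IPv4Address(route)) & care
        return value, care

    def matches(dest: str, entry) -> bool:
        value, care = entry
        return (int(IPv4Address(dest)) & care) == value

    entry = tcam_entry("10.0.42.0", 24)   # first 24 bits stored, last 8 are X's
    print(matches("10.0.42.77", entry))   # True: only the /24 portion is compared
    print(matches("10.0.43.77", entry))   # False
    ```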

    As a practical matter then, the TCAM must still be as wide as the longest possible route, which is 32 bits for IPv4 and 128 bits for IPv6. Yes, I suppose some savings could be made if a CIDR-only TCAM could conserve the X bits, but this makes little difference in practice and it’s generally easier to design the TCAM for max width anyway, even though non-CIDR isn’t supported on most routing hardware anymore.


  • To start off, I’m sorry to hear that you’re not receiving the healthcare you need. I recognize that these words on a screen aren’t going to solve any concrete problems, but in the interest of a fuller comprehension of the USA healthcare system, I will try to offer an answer/opinion to your question that goes into further depth than simply “capitalism” or “money and profit” or “greed”.

    What are my qualifications? Absolutely none, whatsoever. Although I did previously write a well-received answer in this community about the USA health insurance system, which may provide some background for what follows.

    In short, the USA healthcare system is a hodge-podge of disparate insurers and government entities (collectively “payers”), and doctors, hospitals, clinics, ambulances, and more government entities (collectively “providers”), overseen by separate authorities in each of the 50 US States, territories, tribes, and certain federal departments (collectively “regulators”). There is virtually no national-scale vertical integration in any sense, meaning that no single or large entity has the viewpoint necessary to thoroughly review the systemic issues in this “system”, nor is there the visionary leadership from within the system to even begin addressing its problems.

    It is my opinion that by bolting-on short-term solutions without a solid long-term basis, the nation was slowly led to the present dysfunction, akin to boiling a frog. And this need not be through malice or incompetence, since it can be shown that even the most well-intentioned entities in this sordid and intricate pantomime cannot overcome the pressures which this system creates. Even when there are apparent winners like filthy-rich plastic surgeons or research hospitals brimming with talented expert doctors of their specialty, know that the toll they paid was heavy and worse than it had to be.

    That’s not to say you should have pity on all such players in this machine. Rather, I wish to point to what I’ll call “procedural ossification”, as my field of computer science has a term known as “protocol ossification” that originally borrowed the term from orthopedics, the study of bone deformities. How very fitting for this discussion.

    I define procedural ossification as the loss of flexibility in some existing process, such that rather than performing the process in pursuit of a larger goal, the process itself becomes the goal – a mindless, rote machine where the crank is turned and the results come out, even though this wasn’t what was idealized. To some, this will harken to bureaucracy in government, where pushing papers and forms may seem more important than actually solving real, pressing issues.

    I posit to you that the USA healthcare system suffers from procedural ossification, as many/most of the players have no choice but to participate as cogs in the machine, and that we’ve now entirely missed the intended goal of providing for the health of people. To be an altruistic player is to be penalized by the crushing weight of practicalities.

    What do I base this on? If we look at a simple doctor’s office, maybe somewhere in middle America, we might find the staff composed of a lead doctor – it’s her private practice, after all – some Registered Nurses, administrative staff, a technician, and an office manager. Each of these people has particular tasks to make just this single doctor’s office work. Whether it’s supervising the medical operations (the doctor) or operating/maintaining the X-ray machine (technician) or cutting the checks to pay the building rent (office manager), you do need all these roles to make a functioning, small doctor’s office.

    How is this organization funded? In my prior comment about USA health insurance, there was a slide which showed the convoluted money flows from payers to providers, which I’ve included below. What’s missing from this picture is how even with huge injections of money, bad process will lead to bad outcomes.

    [Image: financial flow in the US healthcare system]

    In an ideal doctor’s office, every patient that walks in would be treated so that their health issues are managed properly, whether that’s fully curing the condition or controlling it to not get any worse. Payment would be conditioned upon the treatment being successful and within standard variances for the cost of such treatment, such as covering all tests to rule out contributing factors, repeat visits to reassess the patient’s condition, and outside collaboration with other doctors to devise a thorough plan.

    That’s the ideal, and what we have in the USA is an ossified version of that, horribly contorted and in need of help. Everything done in a doctor’s office is tracked with a “CPT/HCPCS code”, which identifies the type of service rendered. That, in and of itself, could be compatible with the ideal doctor’s office, but the reality is that the codes control payment as hard rules, not considering “reasonable variances” that may have arisen. When you have whole professions dedicated to properly “coding” procedures so an insurer or Medicare will pay reimbursement, that’s when we’ve entirely lost the point and grossly departed from the ideal. The payment tail wags the doctor dog.

    To be clear, the coding system is well intentioned. It’s just that its use has been institutionalized into only ever paying out if and only if a specific service was rendered, with zero consideration for whether this actually advanced the patient’s treatment. The coding system provides a wealth of directly-comparable statistical data, if we wanted to use that data to help reform the system. But that hasn’t substantially happened, and when you have fee-for-service (FFS) as the base assumption, of course patient care drops down the priority list. Truly, the acronym is very fitting.

    Even if the lead doctor at this hypothetical practice wanted to place patient health at the absolute forefront, she will be without the necessary tools to properly diagnose and treat the patient if she cannot immediately or later obtain reimbursement for the necessary services rendered. She and her practice would have to absorb costs that a “conforming” doctor’s office would avoid, and that puts her at a further disadvantage.

    The only major profession that I’m immediately aware of which undertakes unknown costs with regularity, in the hopes of a later full-and-worthwhile reimbursement, is the legal profession. There, it is the norm for personal injury lawyers to take cases on contingency, meaning that the lawyer will eat all the costs if the lawsuit does not ultimately prevail. But if the lawyer succeeds, then they earn a fixed percentage of the settlement or court judgement, typically 15-22%, to compensate for the risk of taking the case on contingency.

    What’s particularly notable is that lawyers must have a good eye to only accept cases they can reasonably win, and to decline cases which are marginal or unlikely to cover costs. This heuristic takes time to hone, but a lawyer could start by being conservative with the cases they accept. The reason I mention this is because a doctor-patient relationship is not at all as transactional as a lawyer-client relationship. A doctor should not drop a patient because their health issues won’t allow the doctor to recoup costs.

    The notion that an altruistic doctor’s office can exist sustainably under the FFS model would require said doctor to discard the final shred of decency that we still have in this dysfunctional system. This is wrong in a laissez-faire viewpoint, is wrong in a moral viewpoint, and is wrong in a healthcare viewpoint. Everything about this is wrong.

    But the most insidious problems are those that perpetuate themselves. And because all those aforementioned payers, providers, and regulators are merely existing and cannot themselves take the initiative to unwind this mess, it’s going to take more than a nudge from outside to make actual changes.

    As I concluded my prior answer on USA health insurance, I noted that Congressional or state-level legislation would be necessary to deal with spiraling costs for healthcare. I believe the same would be required to refocus the nation’s healthcare procedures to put patient care back as the primary objective. This could come in the form of a single-payer model. Or by eschewing insurance pools outright by extending a government obligation to the health of the citizenry, commonly in the form of a universal healthcare system. Costs of the system would become a budgetary line-item so that the health department can focus its energy on care.

    To be clear, the costs still have to be borne, but rather than fighting for reimbursement, they could be made into a form of mandatory spending, meaning that they are already authorized to be paid from the Treasury on an ongoing basis. For reference, the federal Medicare health insurance system (for people over 65) is already a mandatory spending obligation. So upgrading Medicare to universal old-people healthcare is not that far of a stretch, nor would it be much further to extend it to cover every person in the country.



  • Thank you for that detailed description. I see two things which are of concern: the first is the IPv6 “network is unreachable” error, and the second is the IPv4 connection timing out, as opposed to being rejected.

    So starting in order: the machine on the external network that you’re running curl on, does it have a working IPv6 stack? As in, if you opened a web browser to https://test-ipv6.com/ , does it pass all or most tests? An immediate “network is unreachable” suggests that the external machine doesn’t have IPv6 connectivity, which doesn’t help debug what’s going on with the services.

    Also, you said that all services that aren’t on port 80 or 443 are working when viewed externally, but do you know if that was with IPv4 or IPv6? I use a browser extension called IPvFoo to display which protocol the page has loaded with, available for Chrome and Firefox. I would check that your services are working over IPv6 equally well as IPv4.
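
    If you’d rather check from a terminal than a browser, a small script along these lines can probe a hostname over each protocol family separately (the hostname and port are placeholders to swap for your own service):

    ```python
    # Probe a host over IPv4 and IPv6 separately. HOST/PORT are placeholders.
    import socket

    HOST, PORT = "example.com", 443

    for label, family in (("IPv4", socket.AF_INET), ("IPv6", socket.AF_INET6)):
        try:
            sockaddr = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)[0][4]
            with socket.socket(family, socket.SOCK_STREAM) as s:
                s.settimeout(5)
                s.connect(sockaddr)
            print(f"{label}: connected to {sockaddr[0]} port {PORT}")
        except OSError as exc:
            print(f"{label}: failed ({exc})")
    ```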

    Now for the second issue. Since you said all services except those on port 80, 443 are reachable externally, that would mean the IP address – v4 or v6, whichever one worked – is reachable but specifically ports 80 and 443 did not.

    On a local network, the norm (for properly administered networks) is for OS firewalls to REJECT unwanted traffic – I’m using all-caps simply because that’s what I learned from Linux iptables. A REJECT means that the packet was discarded by the firewall, and then an ICMP notification is sent back to the original sender, indicating that the firewall didn’t want it and the sender can stop waiting for a reply.

    For WANs, though, the norm is for an external-facing firewall to DROP unwanted traffic. The distinction is that DROPping is silent, whereas REJECT sends the notification. For port forwarding to work, both the firewall on your router and the firewall on your server must permit ports 80 and 443 through. It is a very rare network that blocks outbound ICMP messages from a LAN device to the Internet.

    With all that said, I’m led to believe that your router’s firewall is not honoring your port-forward setting. Because if it did and your server’s firewall discarded the packet, it probably would have been a REJECT, not a silent drop. But curl showed your connection timed out, which usually means no notification was received.
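
    To make that REJECT-vs-DROP distinction concrete, here is roughly what each looks like from the client’s side with a plain socket (the address below is a documentation placeholder, not your server):

    ```python
    # A REJECT usually surfaces as "connection refused"; a DROP as a timeout.
    import socket

    def probe(host: str, port: int, timeout: float = 5.0) -> str:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return "open: something answered on that port"
        except ConnectionRefusedError:
            return "refused: a REJECT came back (or nothing is listening)"
        except socket.timeout:
            return "timed out: packets were silently DROPped somewhere along the path"
        except OSError as exc:
            return f"other failure: {exc}"

    print(probe("203.0.113.10", 443))  # placeholder address from the documentation range
    ```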

    This is merely circumstantial, since there are some OS’s that will DROP even on the LAN, based on misguided and improper threat modeling. But you will want to focus on the router’s firewall, as one thing routers often do is intercept ports 80 and 443 for the router’s own web UI. Thus, you have to make sure there aren’t such hidden rules that preempt the port-forwarding table.


  • I’m still trying to understand exactly what you do have working. You have other services exposed by port numbers, and they’re accessible in the form <user>.duckdns.org:<port> with no problems there. And then you have Jellyfin, which you’re able to access at home using https://jellyfin.<user>.duckdns.org without problems.

    But the moment you try accessing that same URL from an external network, it doesn’t work. Even if you use HTTP with no S, it still doesn’t connect. Do I understand that correctly?


  • I know this is c/programmerhumor but I’ll take a stab at the question. If I may broaden the question to include collectively the set of software engineers, programmers, and (from a mainframe era) operators – but will still use “programmers” for brevity – then we can find examples of all sorts of other roles being taken over by computers or subsumed as part of a different worker’s job description. So it shouldn’t really be surprising that the job of programmer would also be partially offloaded.

    The classic example of computer-induced obsolescence is the job of typist, where a large organization would employ staff to operate typewriters to convert hand-written memos into typed documents. Helped by the availability of word processors – no, not the software but a standalone appliance – and then the personal computer, the expectation moved to where knowledge workers have to type their own documents.

    If we look to some of the earliest analog computers, built to compute differential equations such as for weather and flow analysis, a small team of people would be needed to operate them and interpret the results for the research staff. But nowadays, researchers are expected to crunch their own numbers, possibly aided by a statistics or data analysis expert, but they’re still working in R or Python, as opposed to a dedicated person or team that sets up the analysis program.

    In that sense, the job of setting up tasks to run on a computer – that is, the old definition of “programming” the machine – has moved to the users. But alleviating the burden on programmers isn’t always going to be viewed as obsolescence. Otherwise, we’d say that tab-complete is making human-typing obsolete lol



  • My last post didn’t substantially address smaller ISPs, and from your description, it does sound like your ISP might be a smaller operator. But essentially, on the backend, a smaller ISP won’t have the customer base to balance their traffic in both directions. But they still need to provision for peak traffic demand, and as you observed, that could mean leaving capacity on the table, err fibre. This is correct from a technical perspective.

    But now we touch on the business side of things again. The hypothetical small ISP – which I’ll call the Retail ISP, since they are the face that works with end-user residential customers – will usually contract with one or more regional ISPs in the area for IP transit. That is, upstream connectivity to the broader Internet.

    It would indeed be wasteful and expensive to obtain an upstream connection that guarantees 40 Gbps symmetric at all times. So they don’t. Instead, the Retail ISP would pursue a burstable billing contract, where they commit to specific, continual, averaged traffic rates in each direction, but have some flexibility to use more or less than that committed value.

    So even if the Retail ISP is guaranteeing each end-user at least 40 Gbps download, the Retail ISP must write up a deal with the Upstream ISP based on averages. And with, say, 1000 customers, the law of averages will hold true. So let’s say the average rates are actually 20 Gbps down/1 Gbps up.

    To be statistically rigorous though, I should mention that traffic estimation is a science, with applicability to everything from data-network and road-traffic planning, to queuing for the bar at a music venue, to managing electric grid stability. Looking at historical data to determine a weighted average would be somewhat straightforward, but compensating for variables so that it can become future-predictive is the stuff of statisticians with post-nominative degrees.

    What I can say though, from what I remember in calculus at uni, is that if each end-user’s traffic rates are independent from other end-users (a proposition that is usually true, but not necessarily at all times of day), then the Central Limit Theorem states that the aggregate traffic across all end-users will approximate a normal distribution (aka Gaussian, or bell curve), getting closer for more users. This was a staggering result when I first learned it, because it really doesn’t matter what each user is doing; it all becomes a bell curve in the end.
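
    As a toy demonstration of that (all the numbers here are made up purely for illustration, loosely mirroring the 1000-customer, ~20 Gbps example above):

    ```python
    # Toy Central Limit Theorem demo: aggregate traffic of many independent users
    # looks roughly Gaussian, regardless of each user's own skewed distribution.
    import random

    def user_demand_mbps() -> float:
        """One user's instantaneous demand: very skewed (mostly small, occasional bursts)."""
        return random.expovariate(1 / 20)  # exponential with mean ~20 Mbps

    trials = 2000
    aggregates_gbps = [
        sum(user_demand_mbps() for _ in range(1000)) / 1000  # 1000 users, converted to Gbps
        for _ in range(trials)
    ]

    mean = sum(aggregates_gbps) / trials
    std = (sum((x - mean) ** 2 for x in aggregates_gbps) / trials) ** 0.5
    print(f"aggregate demand: mean ~ {mean:.1f} Gbps, std dev ~ {std:.2f} Gbps")
    # A histogram of aggregates_gbps is a narrow bell curve around ~20 Gbps,
    # even though no single user's demand looks anything like a bell curve.
    ```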

    The Retail ISP’s contract with the Upstream ISP probably has two parts: a circuit, and transit. The circuit is the physical line, and for the given traffic, a 50 Gbps fibre connection might be provisioned to leave lots of burstable headroom. But if the Retail ISP is somewhat remote, perhaps a microwave RF link could be set up, or leased from a third party. But we’ll stick with fibre, as that’s going to be symmetrical.

    As a brief aside, even though a 40 Gbps circuit would also be sufficient, sometimes the Upstream ISP’s nearby equipment doesn’t support certain speeds. If the circuit is Ethernet based, then a 40 Gbps QSFP+ circuit is internally four 10 Gbps links bundled together on the same fibre line. But supposing the Upstream ISP normally sells 200 Gbps circuits, then 50 Gbps to the Retail ISP makes more sense, as a 200 Gbps QSFP56 circuit is internally made from four 50 Gbps lanes, which oftentimes can be broken out. The Upstream and Retail ISPs need to agree on the technical specs for the circuit, but it certainly must provide overhead beyond the averages agreed upon.

    And those averages are captured in the transit contract, where brief exceedances/underages are not penalized but prolonged conditions would be subject to fees or even result in new contract negotiations. The “waste” of circuit capacity (especially upload) is something both the Retail ISP (who saves money, since guaranteed 50 Gbps would cost much more) and the Upstream ISP willingly accept.

    Why? Because the Upstream ISP is also trying to balance the traffic to their upstream, to avoid fees for imbalance. So even though the Retail ISP can’t guarantee symmetric traffic to the Upstream ISP, what the Retail ISP can offer is predictability.

    If the Upstream ISP can group the Retail ISP’s traffic with a nearby data center, then that could roughly balance out, and allow them to pursue better terms with the subsequent higher tier of upstream provider.

    Now we can finally circle back on why the Retail ISP would decline to offer end-users faster upload speeds. Simply put, the Retail ISP may be aware that even if they offered higher upload, most residential customers wouldn’t really take advantage of it, even if it were a free upgrade. This is the reality of residential Internet traffic. Indeed, the few ISPs in the USA offering residential 10 Gbps connections have to be thoroughly aware that even the most dedicated of, err, Linux ISO aficionados cannot saturate that connection for more than a few hours per month.

    But if most won’t take advantage of it, then that shouldn’t impact the Retail ISP’s burstable contract with the Upstream ISP, and so it’s a free choice, right? Well, yes, but it’s not the only consideration. The thing about offering more upload is that while most customers won’t use it, a small handful will. And maybe those customers are the type that will complain loudly if the faster upload isn’t honored. And that might hurt Retail ISP’s reputation. So rather than take that gamble through guaranteeing faster upload for residential connections, they’d prefer to just make it “best effort”, whatever that means.

    EDIT: The description above sounds a bit defeatist for people who just want faster upload, since it seems that ISPs just want to do the bare minimum and not cater to users who are self-hosting, whom ISPs believe to be a minority. So I wanted to briefly – and I’m aware that I’m long winded – describe what it would take to change that assumption.

    Essentially, existing “average joe” users would have to start uploading a lot more than they are now. With so-called cloud services, it might seem that upload should go up, if everyone’s photos are stored on remote servers. But cloud services also power major sites like Netflix, which are larger download sources. So net-net, I would guess that the residential customer’s download-to-upload ratio is growing wider, and isn’t shrinking.

    It would take a monumental change in networking, computing, or consumer demand to reverse this tide. Example: a world where data sovereignty – bona fide ownership of your own data – is so paramount that everyone and their mother has a social-media server at home that mutually relays and amplifies viral content. That is to say, self-hosting and upload amplification.


  • Historically, last-mile technologies like dial-up, DSL, satellite, and DOCSIS/cable had limitations on their uplink power. That is, the amount of energy they can use to send upload through the medium.

    Dial-up and DSL had to comply with rules on telephone equipment, which I believe limited end-user equipment to less power than what the phone company can put onto the wires, premised on the phone company being better positioned to identify and manage interference between different phone lines. Generally, using reduced power reduces signal-to-noise ratio, which means less theoretical and practical bandwidth available for the upstream direction.
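
    For the intuition behind that SNR point, the usual back-of-the-envelope is the Shannon–Hartley limit, C = B · log2(1 + S/N). The bandwidth and SNR figures below are made-up assumptions just to show the direction of the effect, not real DSL parameters:

    ```python
    # Shannon-Hartley capacity: C = B * log2(1 + S/N).
    # Bandwidth and SNR values are illustrative assumptions, not real DSL specs.
    from math import log2

    def capacity_mbps(bandwidth_hz: float, snr_db: float) -> float:
        snr_linear = 10 ** (snr_db / 10)
        return bandwidth_hz * log2(1 + snr_linear) / 1e6

    B = 1.1e6  # ~1.1 MHz of usable spectrum (assumed)
    print(f"SNR 40 dB: {capacity_mbps(B, 40):.1f} Mbps ceiling")
    print(f"SNR 20 dB: {capacity_mbps(B, 20):.1f} Mbps ceiling")  # less power -> lower SNR -> lower ceiling
    ```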

    Cable has a similar restriction, because cable plants could not permit end-user “back feeding” of the cable system. To make cable modems work, some amount of power must be allowed to travel upstream, but too much would potentially cause interference to other customers. Hence, regulatory restrictions on upstream power. This also matched actual customer usage patterns at the time.

    Satellite is more straightforward: satellite dishes on earth are kinda tiny compared to the bus-sized satellite’s antennae. So sending RF up to space is just harder than receiving it.

    Whereas fibre has a huge amount of bandwidth, to the point that when new PON standards are written, they don’t even bother reusing the old standard’s allocated wavelength, but define new wavelengths. That way, both old and new services can operate on the fibre during the switchover period. So fibre by-default allocates symmetrical bandwidth, although some PON systems might still be closer to cable’s asymmetry.

    But there’s also the backend side of things: if a major ISP only served residential customers, who predominantly have asymmetric traffic patterns, then they will likely have to pay money to peer with other ISPs, because of the disparity. Major ISPs solve this by offering services to data centers, which generally are asymmetric but tilted towards upload. By balancing residential with server customers, the ISP can obtain cheaper or even free peering with other ISPs, because symmetrical traffic would benefit both and improve the network.


  • You are correct: even when you have a live body on the stand about to give testimony, it is essential to lay the foundation as to who they are and their legitimacy. Obviously, if they aren’t who they say they are, that’s a huge problem. So the party who called the witness will have done their homework in advance, and the opposing lawyers will have been notified in advance of this witness’s appearance and will have conducted their own homework.

    For when a person is testifying but they aren’t in the room, I understand that there are several requirements that a telepresence system must comply with, both technical and usability. Certainly, someone’s visage or image would be preferable to an audio-only phone call. Presumably, the jury needs to trust this witness to believe them or else it’s rather pointless. Nowadays, with deep fakes and AI, it could possibly become an issue if video appearances in court are actually faked, or if the suggestion becomes plausible due to advancements in the technology.

    So if we think of the zombie not as a live body but someone whose presence is being facilitated by the necromancer’s abilities, then the necromancer must be quizzed as to the veracity of their abilities, and the court would have to question what limits must be imposed on the testimony to make it admissible.

    If it’s anything like the bunk science that courts have previously adopted – bite mark analysis comes to mind – then it only takes one court to permit necromancy, and other courts will point to that one case as precedent. This would only be a problem if the necromancy is flawed in some serious way.


  • I’m not a lawyer, but let’s have some fun with this.

    To start, I’m going to have to assume a jurisdiction. I’ll go with California, because Hollywood films have depicted a lot of walking dead, zombies, and whatnot. And also because that’s the jurisdiction I’m most familiar with. I think that such a case where the undead might be a witness would mostly arise in California state courts, since zombies rarely walk/jump/crawl quickly enough to cross state lines from the major population centers of California, which would invoke federal jurisdiction.

    Now, we need to hone in on the type of case. A murder case where the victim is called as a witness would certainly be very juicy. But the same legal intrigue would arise from a less-interesting inheritance or family law case. We could also go into contracts and see whether or not the presence of an undead counts as an “act of God” but maybe that’s a bit too niche and law-school theoretical.

    To really showcase the problems this would pose to the court, we will focus on the undead being a witness in a criminal trial, as the standard of proof to convict the defendant would be proof “beyond a reasonable doubt”. As the most stringent category of proof, it necessarily follows that the court must err on the side of the defendant in matters of impartiality. This is because the court is technically an arm of the state, and the prosecution wields all the resources of the state against an individual who stands accused of some criminal act.

    As such, for criminal trials, there are certain constitutional rights of the defendant that the court must uphold. The foremost is the right to due process, guaranteed by the Fifth and Fourteenth Amendments. One of the results from applying due process is that evidence introduced in a criminal trial must not be “unduly prejudicial”. That is, no evidence can be admitted which so irresponsibly causes the jury to render a verdict based on anything but the law.

    Often, this rule is invoked to set aside irrelevant evidence which has no bearing on the charges, except maybe to impugn the reputation of the defendant so that the jury thinks they’re a terrible person. Other times, it can be used to exclude relevant but really-bad evidence. The US courts have been through cycles where novel science is used in a prosecution but later turns out to be bunk and lacking any foundation in reality. It certainly is “evidence”, but because it purports to be science when it’s really not, it must be excluded. Psychics are certainly not going to be welcomed as subject matter experts.

    Finally, the other category for evidence being unduly prejudicial is when the jury – through no fault of their own – would weigh that evidence as being the primary factor, above all else, whether it’s DNA or video evidence. This is more a matter of testimony evidence rather than physical evidence. Imagine a small, devoutly religious town where the local pastor is called to testify about whether the defendant could have committed hit-and-run.

    Having a respected community authority figure testify about someone’s potential to commit a crime might be something the jury members would be open to hearing, but the judge might have to weigh whether the fact that the lay witness is a pastor will cause the jury to put too much weight on that testimony. If there are other ways to obtain the same evidence – such as bringing in the defendant’s mother or employer – the judge should not allow the pastor to testify, because it could jeopardize the soundness of the trial and lead to an appeal.

    So now we come back to zombies. Would a jury be able to set aside their shock, horror, and awe about a zombie in court so that they could focus on being the finder of fact? If a zombie says they’re an eye-witness to a mugging, would their lack of actual eyeballs confuse the jury? Even more confusing would be a zombie testifying as an expert witness. Does their subject matter need to be recent? What if the case needs an expert on 17th Century Parisian fashion and the undead is from that era? Are there no fashion historians who could provide similar expert opinions?

    But supposing we did overcome all that, there might be one form of testimony which – even though very prejudicial – a lay-witness (ie not expert) zombie might be allowed to give, and I already mentioned it earlier.

    In most jurisdictions and in California, a dying person’s last act which might point to their killer will not necessarily be excluded for being irrelevant or being circumstantial. It is a rebuttable presumption that someone dying has no incentive to lie, and will likely have been the final witness to their own murder.

    To that end, it’s entirely plausible that a zombie who died by murder could come to court to testify against their killer. Of course, how long does it take for the dead to become undead? If that takes longer than the statute of limitations allows, the defendant would walk. Likewise, if the zombie’s testimony is the only shred of evidence for the murder, that’s not likely to convince the jury. Not unless, of course, the details of the testimony match the circumstances of the crime so well that it couldn’t be a fluke.

    TL;DR: rules of evidence would still apply to the undead, and judges must take care to balance the probative value of evidence with any prejudicial quality it may carry.





  • Looking at the diagram, I don’t see any issue with the network topology. And the power arrangement also shouldn’t be a problem, unless you require the camera/DVR setup to persist during a power cut.

    In that scenario, you would have to provide UPS power to all of: the PoE switch, the L3 switch, and the NVR. But if you don’t have such a requirement, then I don’t see a problem here.

    Also, I hope you’re doing well now.


  • If you’ll permit me to broaden the question to “why are political subdivisions allowed to sue each other?”, then the answer often is two-fold: 1) political subdivisions are incorporated entities under the law, so they have a right to pursue redress in front of a higher court, and 2) when the higher power is unclear about the division of rights among its subdivisions, then only a court can resolve the issue.

    For #1, this is the same power which allows a city, county, municipality, special district, state, and sometimes the federal government to obtain an enforcement order against an individual or company. An example would be an injunction to stop dumping more toxic waste into a river. It should be clear that if a city, county, or state was dumping toxic material into a river, the higher level of government would want to stop that too.

    For #2, ambiguity is rife when it comes to poorly drafted legislation or decisions which “passed the buck” far into the future. Historical examples involving borders include the British Partition of India or the Delaware Wedge, the latter of which was in dispute for nearly 300 years. You can also find examples in international law, such as whether or not certain islands count as territory for the purpose of extending a country’s Exclusive Economic Zone.

    In the Delaware Wedge case, because the matter involved three or four US States, it would ultimately have to be adjudicated by a federal court, either directly before the US Supreme Court or through arbitration under the auspices of the court. Alternatively, Congress potentially could have settled the matter forthright, but since the dispute predates the founding of the union, Congress probably thought the states would quickly work it out on their own.

    Here in California, we see some similar misgivings between the state’s own political subdivisions, with a recent example where a county District Attorney brought suit against the most populous city within that county, alleging that state law was being violated.

    As for how a county is allowed to prosecute a state law violation, and why a city can be a target of such prosecution, we need to briefly look at the structure of California governance. Despite what some critics have suggested, California is not a homogenous, unitary state with a singular political and social identity. Rather, it may be one of the most decentralized states in the union, with cities with populations ranging from the low hundreds to the low millions, all coexisting within one set of general state laws.

    The state’s primary subdivisions are its 58 counties, which between them divide up all the land in the state. Counties are responsible to citizens within their borders, authorized to write and enforce laws, except that county laws don’t apply within incorporated borders. That is, cities.

    In essence, the incorporation of a city creates an enclave within a county, and while the state limits what categories of laws a county may author, cities have much more “home rule” authority. This is what allows the City of Los Angeles (pop. 3.8 million) and Amador City (pop. 200) to have similar powers yet clearly applied much differently. It would be a madhouse in the state Legislature if every city needed custom legislation to enable them to serve their people appropriately. So California just lets the cities do their own thing, within reason.

    In terms of enforcement, to prevent overworking the state Attorney General, enforcement of the state’s laws is delegated to the county District Attorneys. These 58 attorneys wield the power of the state within their county borders, such as brokering a plea deal or bringing enforcement lawsuits.

    The safeguard is that the state Attorney General can – at any time – take over an ongoing prosecution from the county DA. For example, investigations involving city police misconduct are now by-default taken away from the county DA and investigated by the state AG, because of a historical pattern of police being too cozy with the DA.

    In the earlier case where the county sued the city within it, the state AG could have also taken that case away. But seeing as the case was already slipshod, the AG probably just decided to let it run its course, where a judge would likely dismiss it.

    TL;DR: political subdivisions do weird things if no guardrails exist or if no other alternative appears.