• 0 Posts
  • 17 Comments
Joined 5 months ago
Cake day: May 29th, 2024

  • This model isn’t “learning” anything in any way that is even remotely like how humans learn. You are deliberately simplifying the complexity of the human brain to make that comparison.

    I do think the complexity of artificial neural networks is often overstated. A real neuron is a lot more complex than an artificial one, and real neurons are not simply feed-forward like ANNs (which have to be, because they’re trained using back-propagation), but instead have their own spontaneous activity (which kinda implies that real neural networks don’t learn using stochastic gradient descent with back-propagation). But to say that there’s nothing at all comparable between the way humans learn and the way ANNs learn is wrong IMO.
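
    For anyone who hasn’t seen what “feed-forward plus back-propagation” actually looks like, here’s a minimal toy sketch (numpy only; the layer sizes, learning rate, and fake data are arbitrary choices for illustration, nothing like a real system):

    ```python
    # Minimal two-layer feed-forward net trained with SGD + back-propagation.
    # Everything here (sizes, learning rate, fake data) is an arbitrary toy choice.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 4))                  # toy inputs
    y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # toy binary targets

    W1 = rng.normal(scale=0.5, size=(4, 8))        # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(8, 1))        # hidden -> output weights
    lr = 0.1

    for step in range(2000):
        i = rng.integers(0, len(X), size=32)       # random minibatch ("stochastic")
        x, t = X[i], y[i]

        # Forward pass: activity flows strictly one way, layer by layer.
        h = np.tanh(x @ W1)
        p = 1.0 / (1.0 + np.exp(-(h @ W2)))        # sigmoid output

        # Backward pass: the error gradient flows back through the same layers.
        d_out = (p - t) / len(i)                   # grad of cross-entropy wrt pre-sigmoid
        grad_W2 = h.T @ d_out
        d_hid = (d_out @ W2.T) * (1.0 - h ** 2)    # tanh derivative
        grad_W1 = x.T @ d_hid

        # Gradient descent step.
        W2 -= lr * grad_W2
        W1 -= lr * grad_W1

    p_all = 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1) @ W2)))
    print("training accuracy:", ((p_all > 0.5) == y).mean())
    ```

    The strictly one-directional forward pass and the global error signal pushed back through it are exactly the parts that don’t have an obvious counterpart in spontaneously active biological networks.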

    If you read books such as V.S. Ramachandran and Sandra Blakeslee’s Phantoms in the Brain or Oliver Sacks’ The Man Who Mistook His Wife for a Hat you will see lots of descriptions of patients with anosognosia brought on by brain injury. These are people who, for example, are unable to see but also incapable of recognizing this inability. If you ask them to describe what they see in front of them they will make something up on the spot (in a process called confabulation) and not realize they’ve done it. They’ll tell you what they’ve made up while believing that they’re telling the truth. (Vision is just one example; anosognosia can manifest in many different cognitive domains.)

    It is V.S. Ramachandran’s belief that there are two processes at work in the brain: a confabulator (a “yes-man”, so to speak) and an anomaly detector (a “critic”). The yes-man’s job is to offer up explanations for sensory input that fit within the existing mental model of the world, whereas the critic’s job is to advocate for changing the world-model to fit the sensory input. In patients with anosognosia something has gone wrong in the connection between the critic and the yes-man in a particular cognitive domain, and as a result the yes-man is the only one doing any work. Even in a healthy brain you can see the effects of the interplay between these two processes, such as with the placebo effect and in hallucinations brought on by sensory deprivation.

    I think ANNs in general and LLMs in particular are similar to the yes-man process, but lack a critic to go along with it.

    What implications does that have for copyright law? I don’t know. Real neurons in a petri dish have already been trained to play games like DOOM and control the yoke of a simulated airplane. If they were trained instead to somehow draw pictures, what would the legal implications of that be?

    There’s a belief that laws and political systems are derived from some sort of deep philosophical insight, but I think most of the time they’re really just whatever works in practice. So, what I’m trying to say is that we can just agree that what OpenAI does is bad and should be illegal without having to come up with a moral imperative that forces us to ban it.



  • But the fact that even just a single rail car holds 360 commuters, equivalent to 180 cars or more on the highway changes the math completely.

    Absolutely. The fact that 3 million people pass through Shinjuku station every day is a testament to that.

    If all of those people lived in a city in the US it would be the country’s third largest, behind NY and LA. (If we’re going by the entire urban area instead of just within city limits it would be the 20th, just ahead of the Baltimore-Columbia-Towson metropolitan statistical area.)

    All in a space that’s smaller than most highway interchanges.

    And that’s not even using two-level train cars (which is where your figure of 360 people per train car comes from, I think?).
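
    To put rough numbers on that (a back-of-the-envelope sketch; the train frequency, train length, highway lane capacity, and car occupancy below are my own assumptions, not figures from any timetable or survey):

    ```python
    # Back-of-the-envelope people-per-hour comparison.
    # Every input here is an assumed round number for illustration only.
    people_per_railcar = 360        # the crush-load figure quoted above
    cars_per_train = 10             # assumed train length
    trains_per_hour = 20            # assumed: one train every 3 minutes per track

    rail_throughput = people_per_railcar * cars_per_train * trains_per_hour

    vehicles_per_lane_hour = 2000   # rough free-flow capacity of one highway lane
    people_per_vehicle = 1.5        # assumed average occupancy

    lane_throughput = vehicles_per_lane_hour * people_per_vehicle

    print(f"one rail track:   ~{rail_throughput:,} people/hour")      # ~72,000
    print(f"one highway lane: ~{lane_throughput:,.0f} people/hour")   # ~3,000
    print(f"ratio:            ~{rail_throughput / lane_throughput:.0f}x")
    ```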


  • While things like merging movements and so on are part of the story, they’re not the whole story.

    You see, saying “traffic jams are caused by merging mistakes and so on” kinda implies that if everyone drove perfectly a highway lane could carry infinitely many cars. In reality a highway lane has a finite capacity, determined by the length of the vehicles traveling on it, the length of the gap between them (indirectly determined by how fast they can start and stop), and the speed they’re moving at.
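
    That capacity limit falls straight out of the geometry: each car occupies its own length plus the gap in front of it, and the gap has to grow with speed. A rough sketch (the vehicle length, reaction time, and speeds are illustrative assumptions):

    ```python
    # Flow of a single lane if every driver keeps a reaction-time gap.
    #   flow = speed / (vehicle_length + gap),  gap = reaction_time * speed
    # The numbers are illustrative assumptions, not measurements.
    vehicle_length_m = 4.5
    reaction_time_s = 1.5

    def lane_flow(speed_kmh: float) -> float:
        """Vehicles per hour past a point at a given steady speed."""
        speed_ms = speed_kmh / 3.6
        gap_m = reaction_time_s * speed_ms
        headway_m = vehicle_length_m + gap_m       # road each car occupies
        return 3600 * speed_ms / headway_m         # vehicles per hour

    for v in (30, 60, 90, 120):
        print(f"{v:>3} km/h -> {lane_flow(v):5.0f} vehicles/hour")
    # Even as speed rises, flow saturates near 3600 / reaction_time ~ 2400 veh/h,
    # because the gaps grow just as fast as the speed does.
    ```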

    There are finite limits on gap width and speed, set by physics and geometry. As the system approaches these limits it becomes less and less able to absorb small disruptions. In other words, as more cars pour onto a freeway a traffic jam becomes more and more likely. The small disruption that gets perceived as the cause was really just the nucleation point for a phase change the system was already poised to go through. If it hadn’t been that event, something else would have triggered it.

    It is interesting to note that once a highway has transitioned from smooth flow to traffic jam its capacity is massively reduced, which you can see in the graphs in the above link. Another interesting thing to note is that the speed vs volume graph, if you flip it upside down, resembles a cost / demand curve from economics, where volume is the demand and time spent commuting (the inverse of speed) is cost. If you do this you see something quite odd, which is that the curve curls up around itself and goes backwards.
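
    A classic toy model that reproduces that backwards-curling curve is the Greenshields relation, where speed falls linearly as the lane fills up; the same traffic volume then corresponds to two very different speeds (the numbers here are made-up illustrations, not taken from the linked graphs):

    ```python
    # Greenshields toy model: speed drops linearly with density, so
    #   flow = density * speed = v_free * k * (1 - k / k_jam)
    # Every flow below the maximum occurs at TWO densities: a fast,
    # free-flowing state and a slow, jammed one. Numbers are illustrative.
    v_free = 100.0   # km/h, speed on an empty road
    k_jam = 120.0    # vehicles/km, bumper-to-bumper density

    def state(k: float) -> tuple[float, float]:
        speed = v_free * (1 - k / k_jam)
        flow = k * speed
        return speed, flow

    print(f"{'density':>8} {'speed':>8} {'flow':>8}")
    for k in (20, 40, 60, 80, 100):
        speed, flow = state(k)
        print(f"{k:8.0f} {speed:8.0f} {flow:8.0f}")
    # Density 20 and density 100 both move ~1667 veh/h, but one does it at
    # ~83 km/h and the other crawls at ~17 km/h -- the "curled back" branch.
    ```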

    This is less like a normal economic situation (the more people use a resource the more they have to pay, the fewer people use it the less they have to pay) and more like a massively multiplayer version of the prisoner’s dilemma. For a while the cost increases only slightly with growing demand, until a certain threshold where each additional actor making a transaction has a chance to massively increase the cost for everyone, even as total consumption is reduced. Actors can choose to voluntarily pay a higher time cost (wait before getting on the freeway) to avoid this, but again, it’s the prisoner’s dilemma. People can just go, trigger a traffic jam anyway, and you’ll still have to sit through it plus all the time you waited trying to prevent it.

    Self-driving cars are often described as a way to eliminate traffic jams, but they don’t change this fundamental property of how roadways work. It’s true that capacity could potentially be increased somewhat by decreasing the gap between cars, since machines have faster reflexes than humans (though I’m skeptical of how much the gap can really be decreased; is every car going to weigh the same at all times? Is every car going to have tires and brakes in identical condition? Is the condition of the asphalt going to be identical at all times and across every part of the roadway? All of these things imply a great deal of variability in stopping distance, which implies a wide safety gap), but the prisoner’s dilemma problem remains. The biggest thing self-driving cars could actually do to alleviate traffic jams would be to not enter a highway until traffic volumes were at a safe level. This can also be accomplished with a traffic volume sensor and a stop light on highway on-ramps.
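
    To make the stopping-distance point concrete, here’s the usual reaction-plus-braking arithmetic with a few assumed deceleration values; the spread between best and worst case is why the safety gap can’t shrink to nothing even with instant reflexes:

    ```python
    # Stopping distance = reaction distance + braking distance
    #   d = v * t_react + v^2 / (2 * a)
    # The decelerations below are assumed examples (dry vs wet vs worn brakes).
    v = 100 / 3.6          # 100 km/h in m/s
    t_react = 0.5          # s, generous for a computer, far better than a human

    for label, a in [("good tires, dry road", 8.0),
                     ("average car, wet road", 5.5),
                     ("worn brakes / heavy load", 4.0)]:
        d = v * t_react + v**2 / (2 * a)
        print(f"{label:<26} {d:5.1f} m to stop from 100 km/h")
    # Roughly 62 m vs 84 m vs 110 m: even with perfect reflexes, the car behind
    # has to assume something close to the worst case, which keeps the gap wide.
    ```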

    Of course trains, on top of having a way higher capacity than a highway lane, don’t suffer from any of this prisoner’s dilemma stuff. If a train car is full and you have to wait for the next one, that’s equivalent to being stopped at a highway on-ramp. People can’t force their way into a train and make it run slower for everyone (well, unless they do something really crazy like stand in the door and stop the train from leaving).



  • CRI is defined by how closely a light source matches the spectral emission of an object glowing at a specific temperature. So, for a light source with a 4000 K color temperature, its CRI describes how closely its emission matches that of an object that’s been heated to 4000 K.

    Because incandescent bulbs emit light by heating a filament, by definition they have a CRI of 100, and it’s impossible to get any better than that. But the emission curve of incandescent lights doesn’t actually resemble that of sunlight at all (sorry for the reddit link). The sun is much hotter than any incandescent bulb and its light is filtered by our atmosphere, resulting in a much flatter, more gently sloping emission curve, versus the incandescent curve, which is extremely lopsided towards the red.
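
    If you want to check the “lopsided towards the red” claim yourself, Planck’s law is enough; here’s a small sketch comparing a roughly 2700 K filament to a roughly 5800 K sun-like black body (temperatures approximate, atmospheric filtering ignored):

    ```python
    # Relative black-body emission across the visible band via Planck's law:
    #   B(wavelength, T) = (2 h c^2 / wavelength^5) / (exp(h c / (wavelength k T)) - 1)
    # Temperatures are approximate; atmospheric filtering of sunlight is ignored.
    import math

    h = 6.626e-34   # Planck constant, J*s
    c = 2.998e8     # speed of light, m/s
    k = 1.381e-23   # Boltzmann constant, J/K

    def planck(wavelength_nm: float, T: float) -> float:
        wl = wavelength_nm * 1e-9
        return (2 * h * c**2 / wl**5) / math.expm1(h * c / (wl * k * T))

    for T, label in [(2700, "incandescent filament"), (5800, "sun-like source")]:
        blue = planck(450, T)
        green = planck(550, T)
        red = planck(650, T)
        # Normalize to red so the shape of the curve is easy to compare.
        print(f"{label} ({T} K): blue/red = {blue/red:.2f}, green/red = {green/red:.2f}")
    # The 2700 K curve emits only a small fraction as much blue as red (steeply
    # tilted toward red), while the 5800 K curve is far flatter across the band.
    ```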

    As you can see in the above link, there are certain high-end LED bulbs that do a much better job replicating noonday sunlight than incandescents. And that flatter emission profile probably provides better color rendering (in terms of being able to distinguish one color from another) than the incandescent ramp.

    Now, whether or not you want your bulbs to look like the noonday sun is another matter. Maybe you don’t want to disrupt your sleep schedule and you’d much rather their emissions resemble the sunset or a campfire (though in that case many halogen and high-output incandescent lamps don’t do a great job either). Or maybe you’re trying to treat seasonal depression and extra sunlight is exactly what you want. But in any case I think CRI isn’t a very useful metric (another reddit link).



  • Adding on to what GreyEyedGhost said, since the year 2000 the price of solar power (per watt) has fallen by more than 50x. Because of this huge drop in price the installed solar capacity has been doubling every 3 years. That means that in the time since 2020 we’ve built more solar capacity than we did in the previous 20 years combined.

    If that’s not good enough then idk. Imagine holding any other technology to that standard. The Model T came out more than 100 years ago, for an inflation-adjusted price of $27,000 and with an MPG of 7.5. ICE cars today are better in a lot of other ways, but they are not 50x cheaper and they are not 50x more fuel-efficient than that.
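
    The “more since 2020 than the previous 20 years combined” claim follows directly from the doubling rate; a quick sanity check, treating “doubling every 3 years” as exact (which it of course isn’t) and using 2024 as the present:

    ```python
    # If installed capacity doubles every 3 years, the last few years' additions
    # outweigh everything that came before. The doubling period is treated as
    # exact here purely for the arithmetic.
    DOUBLING_YEARS = 3
    BASE_CAPACITY_2000 = 1.0                 # arbitrary unit; only ratios matter

    def capacity(year: int) -> float:
        """Installed capacity in the arbitrary unit, assuming pure exponential growth."""
        return BASE_CAPACITY_2000 * 2 ** ((year - 2000) / DOUBLING_YEARS)

    built_2000_to_2020 = capacity(2020) - capacity(2000)   # ~100.6 units
    built_2020_to_2024 = capacity(2024) - capacity(2020)   # ~154.4 units

    print(f"built 2000-2020: {built_2000_to_2020:6.1f}")
    print(f"built 2020-2024: {built_2020_to_2024:6.1f}")
    # The four-year slice since 2020 is already larger than the whole previous
    # twenty years of buildout, and the gap widens every year the doubling holds.
    ```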