If a recording of someone's very rare voice is representable by MP4 or whatever, could monkeys typing out code randomly reproduce their exact timbre + tone + overall sound?

I don’t get how we can get rocks to think + exactly transcribe reality in the ways they do!

  • Bytemeister@lemmy.world
    4 months ago

    It doesn’t get encoded into plaintext. First, the microphone picks up the sound and outputs a stream of intensity values over time. Recording software takes those values and compresses them down into binary data. Then that binary data is saved onto storage. Depending on your storage, it’s then stored magnetically (cassette, floppy, HDD), as a “lockable” logic gate (USB, SSD), or as laser-etched dots and dashes (CD/DVD).

    It’s not getting turned into rocks, it’s getting written on media.
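
    For a concrete picture of that chain, here’s a minimal Python sketch (the 440 Hz test tone and the tone.wav file name are just placeholders standing in for a real microphone and real media):

    ```python
    import math
    import struct
    import wave

    SAMPLE_RATE = 44100   # samples per second (CD quality)
    DURATION = 1.0        # seconds of audio
    FREQ = 440.0          # placeholder test tone instead of a real microphone

    # 1. "Microphone" stage: a stream of intensity values over time.
    #    Here we synthesize a sine wave; a real mic driver hands you the
    #    same kind of numbers, measured from a voltage.
    samples = [
        math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
        for n in range(int(SAMPLE_RATE * DURATION))
    ]

    # 2. "Recording software" stage: quantize each value to a 16-bit integer
    #    and pack the results into raw binary data.
    raw = b"".join(
        struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples
    )

    # 3. "Storage" stage: write the binary data out to media as a WAV file.
    with wave.open("tone.wav", "wb") as f:
        f.setnchannels(1)            # mono
        f.setsampwidth(2)            # 2 bytes = 16 bits per sample
        f.setframerate(SAMPLE_RATE)
        f.writeframes(raw)
    ```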

    Also, some numbers for scale…

    My computer has 3.5 GHz processors. Each one can run roughly 3.5 billion instructions every second. To put that in perspective, the shortest interval humans can perceive is around 13 ms. That processor can run roughly 45 million instructions in that time frame. Computers perform very simple tasks extremely quickly, and that gives the impression of intelligence.
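
    The back-of-the-envelope math, assuming the simplification of one instruction per clock cycle:

    ```python
    clock_hz = 3.5e9        # 3.5 GHz -> ~3.5 billion instructions per second (simplified)
    perception_s = 0.013    # ~13 ms, roughly the shortest interval humans notice

    print(f"{clock_hz * perception_s:,.0f} instructions per 13 ms window")  # ~45,500,000
    ```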

    • cheese_greater@lemmy.world (OP)
      4 months ago

      But how can it perfectly capture my exact voice or the exact timbre of whatever is playing? Like, it’s mind-blowing to me and I have nothing I can analogize it to. It’s incredible we can even take pictures with pixels; sound is just a whole ’nother level that astounds me.

      • invertedspear@lemm.ee
        4 months ago

        Maybe it helps to know that it can’t perfectly capture your voice. It can get close enough that no human can tell the difference, but it’s still not perfect. First of all, it has a sampling rate. To make this more understandable, let’s think of a sample rate of 1 sample per second. Think of two speakers playing at the same time. One is playing your favorite song; the other is playing whatever note that song is on, one second at a time, only changing notes every second. It’s going to somewhat mimic your song, but it’s going to be terrible. Now imagine that second speaker takes 4 samples every second, so it’s playing your song a quarter of a second at a time. It sounds a lot more like your song, but in the same way stop motion looks a lot like movement but isn’t quite right. Now push that sample rate up to tens of thousands of samples a second (CD audio uses 44,100), and you get to the point where you can’t tell the difference, but it still can’t be perfect, because it’s still based on a sample rate.
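
        Here’s a rough numpy sketch of that speaker analogy (the 440 Hz tone and the sample rates are arbitrary picks): sample the same one-second tone at a few rates, hold each sample until the next one, and see how far the result drifts from the “true” signal.

        ```python
        import numpy as np

        FREQ = 440.0            # the "real" sound: a 440 Hz tone (arbitrary choice)
        FINE_RATE = 192_000     # very fine grid standing in for the continuous signal

        t = np.arange(0, 1.0, 1 / FINE_RATE)
        true_signal = np.sin(2 * np.pi * FREQ * t)

        for rate in (1_000, 4_000, 16_000, 44_100):      # samples per second
            # Sample the tone `rate` times a second, then hold each value until
            # the next sample -- like the second speaker holding a note.
            samples = np.sin(2 * np.pi * FREQ * np.arange(rate + 1) / rate)
            held = samples[(t * rate).astype(int)]
            print(f"{rate:>6} samples/s -> mean error {np.abs(held - true_signal).mean():.3f}")
        ```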

        If you can grok pictures from pixels, you can picture the same thing. If you averaged a picture out to one giant pixel, it’s unrecognizable; at 4, 8, or 16 pixels, maybe a simple icon starts to approximate something recognizable. That little icon in your browser tab is usually 32 × 32 pixels, 1,024 pixels total, and we barely consider that an image. It’s all about pixel count (the sample rate). When you zoom in, you find it’s not perfect; you always get down to individual pixels, unlike optical zoom, where you can keep zooming almost indefinitely as long as you can collect enough light.
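
        And the pixel version of the same experiment, sketched with numpy on a made-up gradient “photo” so it stays self-contained:

        ```python
        import numpy as np

        # A synthetic 256x256 "photo": a smooth diagonal gradient.
        x = np.linspace(0.0, 1.0, 256)
        photo = (x[:, None] + x[None, :]) / 2.0

        def downsample(img: np.ndarray, n: int) -> np.ndarray:
            """Average the image down to an n x n grid of 'pixels'."""
            block = img.shape[0] // n
            return img[: n * block, : n * block].reshape(n, block, n, block).mean(axis=(1, 3))

        for n in (1, 4, 32, 256):
            small = downsample(photo, n)
            # Blow the small version back up and measure how far it is from the original.
            restored = np.repeat(np.repeat(small, 256 // n, axis=0), 256 // n, axis=1)
            err = np.abs(restored - photo).mean()
            print(f"{n:>3} x {n:<3} pixels -> mean error {err:.4f}")
        ```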

      • Bytemeister@lemmy.world
        4 months ago

        It doesn’t get your exact voice. Your speech gets compressed into digital “steps” that closely mimic the continuous “analog” output of your voice.
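
        Those steps are just rounding: every sample gets snapped to the nearest level the bit depth allows. A tiny sketch (16 bits is what CDs use; the input value is arbitrary):

        ```python
        def quantize(sample: float, bits: int = 16) -> float:
            """Snap a sample in [-1.0, 1.0] to the nearest step a `bits`-deep recording can store."""
            levels = 2 ** (bits - 1)     # 32768 steps on each side of zero for 16-bit
            return round(sample * levels) / levels

        original = 0.123456789
        print(quantize(original))        # 0.123443603515625 -- close, but not your exact voice
        print(quantize(original, 8))     # 0.125             -- coarser steps, rougher copy
        ```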

      • Barack_Embalmer@lemmy.world
        4 months ago

        Everything about the exact timbre of your voice is captured in the waveform that represents it. As long as the sampling rate and bit depth are good enough to mimic your actual voice without introducing digital artefacts (something analogous to a pixelated image), that’s all it takes to reproduce any sound with arbitrary precision.

        Timbre is the result of having a specific set of frequencies playing simultaneously, which is characteristic of the specific shape and material properties of the object vibrating (be it a guitar string, drum skin, or vocal cords).

        As for how multiple frequencies can “exist” simultaneously at a single instant in time, you might want to read up on Fourier’s theorem and watch 3Blue1Brown’s brilliant series on differential equations that explores Fourier series https://www.youtube.com/watch?v=spUNpyF58BY
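
        A small numpy sketch of that idea: build one waveform out of a fundamental plus two overtones (the frequencies and weights here are made up), then let an FFT pull those same frequencies back out of it.

        ```python
        import numpy as np

        RATE = 8_000                      # samples per second
        t = np.arange(0, 1.0, 1 / RATE)

        # One waveform, but with the harmonic mix that gives an instrument its timbre
        # (220 Hz fundamental plus two overtones; the weights are arbitrary).
        note = (1.0 * np.sin(2 * np.pi * 220 * t)
                + 0.5 * np.sin(2 * np.pi * 440 * t)
                + 0.25 * np.sin(2 * np.pi * 660 * t))

        # Fourier analysis: how much of each frequency is present in that single signal.
        spectrum = np.abs(np.fft.rfft(note)) / (len(note) / 2)
        freqs = np.fft.rfftfreq(len(note), d=1 / RATE)

        for i in np.argsort(spectrum)[-3:][::-1]:
            print(f"{freqs[i]:6.0f} Hz  amplitude {spectrum[i]:.2f}")
        ```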

    • AstralPath@lemmy.ca
      4 months ago

      It’s funny that human perception anecdotally gets pegged at double-digit milliseconds, when if you ask any drummer or guitar player about input latency, they’ll tell you the absolute maximum round-trip latency at which playing the instrument is still enjoyable is in the range of 5 ms.

      Only once latency dips under 5 ms does it start feeling “right”. Personally, I groan when I have to use anything over 3 ms with my guitar, because the second I hit high tempos the latency is unbearable.

      Below 3 ms it gets very hard to say you can feel a difference. 16th notes at 250 BPM with 5 ms of latency put you at nearly 10% of the note separation time. It’s 100% perceivable.
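
      Worked through, using the tempo and subdivision from that last point:

      ```python
      bpm = 250                 # quarter-note beats per minute
      note_s = 60 / bpm / 4     # time between 16th notes: 0.06 s = 60 ms
      latency_s = 0.005         # 5 ms round-trip latency

      print(f"16th-note spacing: {note_s * 1000:.0f} ms")
      print(f"latency as a share of that gap: {latency_s / note_s:.1%}")  # ~8.3%
      ```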