“We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input.”
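
A minimal, purely illustrative sketch of that idea (not the paper’s actual architecture; the CSI tensor shape and every layer size below are assumptions): a network that encodes a WiFi CSI amplitude/phase tensor and decodes it into per-pixel body-part and UV predictions, DensePose-style.

```python
# Hypothetical sketch: map a CSI tensor (amplitude + phase over a 3x3 antenna
# grid, 30 subcarriers, a few time samples) to a coarse spatial grid of
# per-pixel predictions: 24 body parts + background, plus (u, v) per part.
import torch
import torch.nn as nn

class WifiDensePoseSketch(nn.Module):
    def __init__(self, n_parts: int = 24):
        super().__init__()
        # Encode the flattened CSI measurement into a compact feature vector.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 3 * 3 * 30 * 5, 1024),  # 2 = amplitude + phase
            nn.ReLU(),
            nn.Linear(1024, 64 * 6 * 8),
            nn.ReLU(),
        )
        # Decode into an image-like grid and predict, for every cell, a part
        # label (24 parts + background) and a (u, v) coordinate per part.
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (64, 6, 8)),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, (n_parts + 1) + 2 * n_parts,
                               kernel_size=4, stride=2, padding=1),
        )

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(csi))

# Fake batch of 4 CSI measurements -> (4, 73, 24, 32) dense prediction maps.
out = WifiDensePoseSketch()(torch.randn(4, 2, 3, 3, 30, 5))
print(out.shape)
```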

  • mryessir@lemmy.sdf.org · 10 months ago

    Doesn’t this mean the Matrix film was right with its visualization (regardless of orientation)?

      • ChunkMcHorkle@lemmy.world · 10 months ago

        Nah, you’re good. For some of us it’s the first time seeing it, and I really liked reading what people far more familiar with this tech have to say about it here, including that it’s old news for them. So thank you, seriously, for posting it.

  • xia@lemmy.sdf.org · 10 months ago

    Which leads to the obvious question: how long has the military been able to do this?

  • webghost0101@sopuli.xyz · 10 months ago

    The reason this is good for privacy is that we wouldn’t need a new smart device sending other data (like video) off to the cloud for heavy processing.

    We might instead be able to just update wifi routers to do this locally.

    What actually happens with that data would then depend on the router and the user’s configuration.

    The user could have full local control… except that a neighbour’s wifi could be spying, and so could someone with a wifi device in a car. But if putting a Faraday cage around people’s exterior walls (combined with an underground physical cable to an internal extender so phones keep working and roaming while inside) becomes a thing, then it looks like it could theoretically be a great system.

    Gotta wait to see how it might look in practice though.

  • SplashJackson@lemmy.ca · 10 months ago

    I heard that they can analyze window glass to get audio recordings because, glass being a slow-moving liquid, vibrations in the air leave an imprint on it.

    Sounds like Fringe Science to me, honestly

    • Peppycito@sh.itjust.works · 10 months ago

      They bounce a laser off the glass and can hear what’s happening in the room by capturing the vibrations of the glass. I seem to remember the US embassy was caught spying on the Canadian Prime Minister using this technique.
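
      A toy simulation of why that works (every number below is made up): sound in the room vibrates the glass, the detector signal rides on that displacement plus drift and noise, and band-pass filtering back to the speech band recovers the audio.

      ```python
      # Toy laser-microphone recovery: all displacement, drift and noise
      # figures are invented for illustration.
      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      fs = 44_100                               # detector sample rate, Hz
      t = np.arange(0, 1.0, 1 / fs)

      # "Speech" in the room: two tones standing in for a voice.
      audio = 0.6 * np.sin(2 * np.pi * 220 * t) + 0.4 * np.sin(2 * np.pi * 700 * t)

      # Glass displacement follows the sound (nanometre scale), riding on slow
      # building sway; the detector adds its own noise.
      displacement = 5e-9 * audio + 2e-7 * np.sin(2 * np.pi * 0.5 * t)
      detector = displacement + 1e-9 * np.random.randn(len(t))

      # Band-pass to the speech band to strip drift and broadband noise,
      # then normalize: that is the recovered audio.
      sos = butter(4, [80, 3000], btype="bandpass", fs=fs, output="sos")
      recovered = sosfiltfilt(sos, detector)
      recovered /= np.max(np.abs(recovered))

      print("correlation with original audio:",
            round(float(np.corrcoef(audio, recovered)[0, 1]), 3))
      ```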

    • bionicjoey@lemmy.ca · 10 months ago

      The “glass is a liquid” thing is a myth. The reason old windows tended to be thicker at the bottom (the usual evidence cited for the myth) is that windowpane-making techniques weren’t very good, so panes always came out with one thicker side. Builders would naturally install the pane with that side at the bottom because it was more stable. Glass doesn’t “flow” over time; it’s an amorphous solid.

    • S_H_K@lemmy.dbzer0.com · 10 months ago

      Yes, that’s possible, and it has been around for some years, but there are conditions. AFAIK the glass needs a sticker on it; the trick is to reflect a laser or infrared beam off it and track its sub-millimetre vibrations, and where the sticker sits and how big the window is both affect the quality of the recording. As for this wifi trick, I think it has to do with the signals not passing cleanly through water or something along those lines, and it would need some specific equipment to pull off; I don’t imagine you could just hack a router and patch surveillance into the firmware.

    • Mossy Feathers (She/They)@pawb.social · edited · 10 months ago

      Also VR nerds. Current tracking is either headset-based, so you can’t move your arms unless the headset can see them, or your arms have to be seen by lighthouses, or you rely purely on gyroscopes and accelerometers, which tend to drift. So you either have blind spots, have to deal with occlusion, or will slowly drift and have to recalibrate periodically. Wifi-based tracking seems like a neat idea tbh.
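
      A tiny numerical illustration of that drift (the 0.5°/s gyro bias is an arbitrary assumption):

      ```python
      # Why pure gyroscope tracking drifts: integrating an angular-rate signal
      # that carries even a small constant bias makes the heading estimate
      # walk away from the truth. Bias and noise values here are made up.
      import numpy as np

      dt = 1 / 100                      # 100 Hz IMU samples
      t = np.arange(0, 60, dt)          # one minute of holding still
      bias_dps = 0.5                    # assumed gyro bias, degrees per second

      true_rate = np.zeros_like(t)      # the headset isn't actually rotating
      measured = true_rate + bias_dps + 0.05 * np.random.randn(len(t))

      heading = np.cumsum(measured) * dt          # naive integration
      print(f"heading error after 60 s: {heading[-1]:.1f} degrees")
      # ~30 degrees of drift from a 0.5 deg/s bias, hence periodic recalibration.
      ```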

      Edit: considering wifi is just photons that aren’t wiggling fast enough for us to see, I’d be surprised if the government doesn’t already have this technology behind closed doors.
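
      For scale, a quick back-of-envelope comparison of the frequencies and wavelengths involved:

      ```python
      # Rough numbers only: WiFi vs visible light.
      C = 299_792_458  # speed of light, m/s

      for name, freq_hz in [("WiFi 2.4 GHz", 2.4e9),
                            ("WiFi 5 GHz", 5.0e9),
                            ("Visible light (~green)", 5.5e14)]:
          print(f"{name}: {freq_hz:.1e} Hz, wavelength ~{C / freq_hz:.3g} m")
      ```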

    • MNByChoice@midwest.social · 10 months ago

      Article is from a year ago. Government tends to be ahead of the curve. As an uninformed guess, they have been using it in high value situations for 4+ years.

      (Dear FBI, the above is a guess based on public information. I don’t know shit.)

  • sundray@lemmus.org · 10 months ago

    I mean you can look through my walls if you want, but don’t come crying to me if you don’t like what you see.

    (I’m painting fantasy miniatures. They’re for a friend.)

  • Xavier@lemmy.ca · 10 months ago

    Henceforth, the building code shall mandate that every room be a perfectly grounded Faraday cage (/s).

    Still, imagine lethal drones integrated with that technology (of course, they already have infrared, maybe even X-rays at some suitable wavelength).

    Nevertheless, pretty cool to see how far we can take preexisting technology with the help of some deep learning layers.