I’m mainly curious about software developers here, or anyone else whose computer is somewhat central to their life, be it professional or hobbyist.

I only have two monitors—one directly in front of me, and another to the right of it, angled toward me. For web development, I keep my editor on the main screen, and anything auxiliary (be that a dev build, a video, StackOverflow, etc.) on the side screen.

I wouldn’t mind a third monitor, and if I had one, I’d definitely use it for log/output, which currently lives in a floating window that I shuffle around as needed. It could be smaller than the other two, and I might even turn it vertical so I could split the screen between output and a terminal, with an AutoHotKey script to focus the terminal.

What about y’all?

[ cross-posted from: https://lemmy.world/post/13864053 ]

  • beeng@discuss.tchncs.de · 11 months ago
    Yes, but yes.

    “What does x mean in the context of y?”

    “Make me a bash script that sends my SSH public key to the server IPs I list in args >4” (sketched below)

    “Draw me a Mermaid diagram with 4 nodes: the first labeled manual, then automated, then semi-automated, and lastly CI/CD”

    "write me a go function that ping’s these ips at a rate of 100 times per second and the json I reference with flag “–input”

    If you cannot find a way to do parts of your job without giving up sensitive IP, I guess that’s bad luck.

    • Vanth@reddthat.com · 11 months ago (edited)

      There’s a difference between trusting a known individual and writing company-wide policies that have to account for new people, dumb people, and honest mistakes.

      Do I know not to put company-sensitive info into servers outside our control? Yes. Can the company trust that every employee knows that? Lol, no. Therefore, there are blanket policies against using tools that require giving up ownership of data to god-knows-who.

      It’s wild to me that any company of decent size isn’t locking this shit down. Before too long, we’ll get our own internal generative AI tools where we control all the data, or an enterprise service with specific data controls that comply with the requirements of heavily regulated international industries.

      “Trust” only works in a very small circle. Corporation-wide, it’s no longer a question of if someone puts no-no data into ChatGPT, it’s a question of when and what’s the damage.