Here’s the entire thing if you don’t want to go to that link:

There was a series of accusations about our company last August from a former employee. Immediately following these accusations, LMG hired Roper Greyell, a large Vancouver-based law firm specializing in labour and employment law, to conduct a third-party investigation. Their website describes them as “one of the largest employment and labour law firms in Western Canada.” They work with both private and public sector employers.

To ensure a fair investigation, LMG did not comment or publicly release any data and asked our team members to do the same. Now that the investigation is complete, we’re able to provide a summary of the findings.

The investigation found that:

  • Claims of bullying and harassment were not substantiated.

  • Allegations that sexual harassment was ignored or not addressed were false.

  • Any concerns that were raised were investigated. Furthermore, from reviewing our history, the investigator is confident that if any other concerns had been raised, we would have investigated them.

  • There was no evidence of “abuse of power” or retaliation. The individual involved may not have agreed with our decisions or performance feedback, but our actions were for legitimate work-related purposes, and our business reasons were valid.

  • Allegations of process errors and miscommunication while onboarding this individual were partially substantiated, but the investigator found ample documentary evidence of LMG working to rectify the errors and the individual being treated generously and respectfully. When they had questions, those questions were responded to and addressed.

In summary, as confirmed by the investigation, the allegations made against the team were largely unfounded, misleading, and unfair.

With all of that said, in the spirit of ongoing improvement, the investigator shared their general recommendation that fast-growing workplaces should invest in continuing professional development. The investigator encouraged us to provide further training to our team about how to raise concerns to reinforce our existing workplace policies.

Prior to receiving this report, LMG solicited anonymous feedback from the team in an effort to ensure there was no unreported bullying and harassment, and hosted a training session which reiterated our workplace policies and reinforced our reporting structure. LMG will continue to assess continuing-education needs for our team.

At this time, we feel our case for a defamation suit would be very strong; however, our deepest wish is to simply put all of this behind us. We hope that will be the case, given the investigator’s clear findings that the allegations made online were misrepresentations of what actually occurred. We will continue to assess if there is persistent reputational damage or further defamation.

This doesn’t mean our company is perfect and our journey is over. We are continuously learning and trying to do better. Thank you all for being part of our community.

  • TagMeInSkipIGotThis@lemmy.nz · 7 months ago

    I just want to jump in here on the whole “tonnes of factual errors” thing…

    A lot of the allegations about the accuracy of their data basically came down to arguments about the validity of statistics garnered from testing methodology, with the Labs guy claiming their methods were super good vs. other content creators claiming theirs were better.

    My opinion is that all of these benchmarking content creators who base their content on rigorous “testing” are full of their own hot air.

    None of them are doing sampling and testing in enough volume to be able to point to any given number and say that it is the metric for a given model of hardware. So the value reduces to: this particular device performed better or worse than these other devices, at this point in time, doing a comparable test on our specific hardware, with our specific software installation, using the electricity supply we have, at the ambient temperatures we tested at.

    It’s marginally useful for a general product-buying comparison, and in my opinion only to a limited degree, because they just aren’t testing in enough volume to get past the lottery of tolerances this gear is released under. Anyone claiming that it’s the performance number to expect is just full of it. Benchmarking presents itself as having scientific objectivity, but there are way too many variables between any given test run that none of these folks isolate before putting their videos up; there’s a rough sketch of what I mean at the end of this comment.

    Should LTT have been better at not putting up numbers they could have known were wrong? Sure! Should they have corrected sooner & clearer when they knew they were wrong? Absolutely! Does anybody have a perfect testing methodology that produces reliable metrics? Ahhh, I’m not so sure. Was it a really bitchy beat-up at the time from someone with an axe to grind? In my opinion, hell yes.
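
    To make that “lottery of tolerances” point concrete, here’s a toy simulation. Every number in it (the ±2% unit-to-unit spread, the ±1% run-to-run noise, the 100 fps baseline) is a made-up assumption for illustration, not anyone’s real test data or methodology:

    ```python
    # Toy model: ten outlets each benchmark ONE retail unit of the same product.
    # The unit-to-unit "silicon lottery" spread and run-to-run noise are invented numbers.
    import random
    import statistics

    random.seed(0)

    UNIT_SPREAD = 0.02     # assumed +/-2% unit-to-unit variation within one model
    RUN_NOISE = 0.01       # assumed +/-1% run-to-run noise on the same unit
    TRUE_MEAN_FPS = 100.0  # assumed "true" average performance of the model

    def review_one_sample(runs: int = 3) -> float:
        """One outlet benchmarks a single retail unit a few times and averages."""
        unit_fps = TRUE_MEAN_FPS * (1 + random.uniform(-UNIT_SPREAD, UNIT_SPREAD))
        return statistics.mean(
            unit_fps * (1 + random.uniform(-RUN_NOISE, RUN_NOISE)) for _ in range(runs)
        )

    outlet_scores = [review_one_sample() for _ in range(10)]
    spread_pct = (max(outlet_scores) - min(outlet_scores)) / min(outlet_scores) * 100
    print([round(s, 1) for s in outlet_scores])
    print(f"spread between outlets testing 'identical' hardware: {spread_pct:.1f}%")
    ```

    Even with nothing changing except which unit landed on the bench, the outlet-to-outlet spread comes out in the low single-digit percent range, which is the same order as a lot of the gaps people argue over.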

    • Synthuir@lemmy.ml · 7 months ago

      A lot of the allegations about the accuracy of their data basically came down to arguments about the validity of statistics garnered from testing methodology…

      I mean, no, not really. They mislabelled graphs entirely, let data that was supposedly comparing components in the same benchmark, by the same testers, on the same platform pass with incredible outliers, and just posted incorrect specs for components, and that’s to say nothing of any of the other allegations brought up at that time. It’s super basic proofreading stuff, not methodology, that they couldn’t be assed to double-check, all because of crunch.

    • KeenSnappersDontCome@lemmy.world · 7 months ago

      There have been a few videos by hardware reviewers addressing the sample-size concern. Gamers Nexus tested 3 different CPU models with 20+ CPUs each and found that the biggest variance from lowest to highest performance was under 4%, while the performance variance in most cases was about 2% (that lowest-to-highest spread calculation is sketched at the end of this comment).

      https://www.youtube.com/watch?v=PUeZQ3pky-w

      The way CPU manufacturing and binning are done means that CPUs in particular will have very minor differences within the same model number.
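
      For anyone who wants the arithmetic behind that “lowest to highest” figure, here’s a quick sketch with made-up scores standing in for 20 units of one CPU model (not GN’s actual data, purely illustrative):

      ```python
      # Made-up benchmark scores for 20 units of the same CPU model; illustrative only.
      scores = [152.1, 150.8, 151.9, 153.0, 150.2, 152.5, 151.1, 150.9,
                152.8, 151.6, 150.5, 152.2, 151.3, 153.2, 150.7, 151.8,
                152.0, 151.4, 150.6, 152.6]

      low, high = min(scores), max(scores)
      spread_pct = (high - low) / low * 100  # "lowest to highest" variance as a percentage
      print(f"lowest-to-highest spread: {spread_pct:.2f}%")  # ~2% for these invented numbers
      ```

      The same calculation applies to whatever scores you plug in.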