• 0 Posts
  • 10 Comments
Joined 1 year ago
Cake day: July 25th, 2023



  • Perhaps we’re addressing different points. The parent comment said that investors are always looking for better and better returns. You said that’s how progress works. That sentiment was my quibble.

    I took “investors are always looking for better returns” to mean “unethically so” and was talking more about what happens long term. Reading your comment above, I think you might have been talking about investing in good faith.

    In a sound system that’s how things work, sure! The company gets investment into its tech and continues to improve, and the investors get to enjoy the returns from that progress.


  • You’re conflating creating dollar value with progress. Yes, the technology moves the total net productivity of humankind forward.

    Investing exists because we want to incentivize that. Currently, you and the thread above are describing bad actors coming in, seeing this small single-digit productivity increase, misrepresenting it so that other investors buy in, and then dipping out and causing the bubble to burst.

    Something isn’t a ‘good’ investment just because it makes you a 600% return. I could go rob someone if I wanted that return. Hell, even if I then killed that person by accident, the net negative to human productivity would be less.

    These bubbles unsettle homes, jobs, markets, and educations. Inefficiency that makes money for someone in the stock market is exactly the kind of thing that should have been crushed out.


  • I don’t disagree with anything you said, but I wanted to weigh in on the “more degrees of freedom” point.

    One major thing to consider is that unless we have 24/7 sensor recording with AI out in the real world, plus continuous monitoring of sensor/equipment health, we’re not going to have the “real” data that the AI triggered on.

    Version and model updates will also likely continue to cause drift unless they’re managed through some sort of central distribution service (a rough sketch of the kind of logging/version tracking I mean is below).

    Any large corp will have this organization and review in place, or will be in the process of figuring it out. Small NFT/crypto bros who jump to AI will not.

    IMO the space will either head towards larger AI ensembles that try to understand where an exact rubric applies vs. more AGI-like human reasoning, or we’ll have to rethink the nuances of our train/test setup and how humans use language to interact with others vs. to understand the world (we all speak the same language as someone else, but there’s still a ton of inefficiency).
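
    To make the logging/version point concrete, here’s a minimal sketch in Python (all names and values are hypothetical, not any particular product’s API) of append-only audit logging: every inference is stored with the raw sensor readings, the sensor-health status at that moment, and the pinned model version, so you can later reconstruct what the model actually triggered on and spot drift across model updates.

    ```python
    # Minimal sketch (hypothetical names) of logging each inference with the
    # raw sensor data, sensor health, and the pinned model version, so the
    # "real" data an AI triggered on can be audited later.

    import json
    import time
    from dataclasses import dataclass, asdict
    from typing import Any

    @dataclass
    class InferenceRecord:
        timestamp: float                    # when the reading was taken
        model_version: str                  # version pinned via a central registry (assumed)
        sensor_readings: dict[str, float]   # raw inputs the model saw
        sensor_health: dict[str, str]       # e.g. {"temp_c": "calibration overdue"}
        prediction: Any                     # whatever the model emitted

    def log_inference(record: InferenceRecord, path: str = "inference_log.jsonl") -> None:
        """Append one record to an append-only JSONL audit log."""
        with open(path, "a") as f:
            f.write(json.dumps(asdict(record)) + "\n")

    # Example usage with made-up values:
    log_inference(InferenceRecord(
        timestamp=time.time(),
        model_version="anomaly-detector==2.4.1",   # hypothetical registry pin
        sensor_readings={"vibration_hz": 118.4, "temp_c": 71.2},
        sensor_health={"vibration_hz": "ok", "temp_c": "calibration overdue"},
        prediction={"anomaly": True, "score": 0.93},
    ))
    ```

    The design choice here is just that the log is append-only and carries the model version with every record; without something like that, a model update silently changes what “the data the AI triggered on” even means.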