Trust in closed-source systems declining


Summary

Trust in closed-source systems—where users cannot see or verify the code that runs software—has been declining, especially as concerns grow about privacy, data ownership, and accountability within technology companies. This shift reflects increased public demand for transparency, control, and ethical practices in how advanced technologies like AI are developed and managed.

  • Prioritize transparency: Clearly communicate how your systems manage data and provide straightforward explanations whenever policies change to reassure your users.
  • Empower user control: Give users simple options to manage their data and privacy preferences directly, building confidence in your platform.
  • Build trust through openness: Consider adopting open-source or community-driven approaches to show your commitment to accountability and invite public scrutiny.
Summarized by AI based on LinkedIn member posts
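
The "empower user control" point above, and the "opt-out ≠ consent" objection in the first post, can be sketched as an explicit opt-in consent store: absent a recorded grant, a data use is denied by default, which inverts the opt-out-by-deadline pattern the posts criticize. This is a minimal illustration, not any vendor's API; every name here (`ConsentRecord`, `PrivacyPreferences`, the `"model_training"` purpose string) is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    """One explicit user decision about a single data use."""
    purpose: str                      # e.g. "model_training"
    granted: bool
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


class PrivacyPreferences:
    """Opt-in by default: a purpose is denied unless explicitly granted."""

    def __init__(self) -> None:
        self._records: dict[str, ConsentRecord] = {}

    def set_consent(self, purpose: str, granted: bool) -> None:
        # Record the decision with a timestamp, so policy changes
        # can be audited against when the user actually consented.
        self._records[purpose] = ConsentRecord(purpose, granted)

    def allows(self, purpose: str) -> bool:
        record = self._records.get(purpose)
        return record is not None and record.granted


prefs = PrivacyPreferences()
print(prefs.allows("model_training"))   # False: no record yet, denied by default
prefs.set_consent("model_training", True)
print(prefs.allows("model_training"))   # True: explicit grant recorded
```

The design choice worth noting is the default in `allows`: silence means "no". An opt-out system would return `True` for unrecorded purposes, which is exactly the behavior the posts below describe as a betrayal of user trust.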
  • Dion Wiggins

    CTO at Omniscien Technologies | Board Member | Strategic Advisor | Consultant | Author

    Trust Betrayed. Again.

    Anthropic—the company that branded itself as “privacy-first” and “safety-driven”—just torched its own moat. Starting now, Claude will train on your chat transcripts and coding sessions unless you manually opt out by September 28. Five years of storage replaces the old 30-day deletion rule. Free, Pro, Max, Claude Code—no exceptions.

    This is not an update. It is a betrayal.

    → Hypocrisy laid bare: The self-proclaimed “responsible” AI company now runs the same playbook as the rest—harvest first, ask forgiveness later.
    → Compliance nightmare: Sensitive conversations, contracts, legal docs, and code can now sit on Anthropic’s servers for half a decade. Opt-out ≠ consent.
    → Structural exposure: For governments and enterprises that bought Claude for its privacy promises, the foundation just cracked.
    → Pattern confirmed: In the end, every closed-model company caves to the same growth imperative: extract more data, hold it longer, and lock users in.

    The last fig leaf of “privacy-first AI” has fallen. The message is simple: sovereignty and control cannot be outsourced. The question for every policymaker, CIO, and enterprise is now clear: how many more times will you let “responsible AI” vendors betray your trust before you build systems you truly control? https://lnkd.in/gm2J-T6h

  • Graham Cooke

    Founder & CEO, Brava Finance — The AI platform turning stablecoins into the next generation of credit markets | Author | Ex-Google | Exited Founder | NED

    OpenAI's closed-source approach puts them on the wrong side of history. Here's why the future of AI must be open and transparent:

    The battle between open and closed systems defines tech history. In each case, closed systems dominated early but open systems won in the end:
    • CompuServe vs Internet
    • Windows vs Linux
    • iOS vs Android

    Why? Because open systems harness collective intelligence: Thousands of developers spot bugs faster than any single company. Security issues get fixed quickly. Innovation accelerates exponentially. The cost of development plummets.

    But with AI, the stakes are higher than ever. Imagine AI making life-or-death decisions:
    • Medical diagnoses
    • Self-driving cars
    • Financial systems
    • Military applications

    Would you trust a system you can't inspect? This is why OpenAI's transformation is concerning. In 2015, they launched with a clear mission: create AI that benefits humanity through open-source development. By 2019, everything changed:
    • Shifted to a "capped-profit" model
    • Took $1B from Microsoft
    • Went closed source

    This betrayal of principles led to Musk's lawsuit in 2024. But this isn't about personal drama. It's about the future of humanity's relationship with AI. Open source creates trust through transparency. When code is visible, you can verify what it does. With closed systems, you're forced to trust black boxes.

    The pattern throughout tech history is clear:
    • Open systems start slower
    • But they win in the end
    • They harness humanity's collective intelligence
    • They build trust through transparency

    We're at a pivotal moment in AI development. Will it be controlled by a few companies? Or will it be open, transparent, and community-driven? History suggests the winner is clear. The question is: which side of history will you be on?

    ---

    I'm Graham. Former Google employee who built $2B+ revenue products. Author of "Web3: The End of Business as Usual." Currently building bravaxyz to make blockchain technology accessible to billions. Follow for more insights on AI, blockchain, and the future of technology.
