At the recent Imagine IF summit, Bitcoin advocate and entrepreneur Matt Odell argued that the future of digital freedom hinges on aligning open-source principles with sustainable business models. “The answer is to make freedom profitable,” Odell told the audience, emphasizing that Bitcoin enables this transformation by providing “money you can save and spend without permission.” Speaking on behalf of OpenSats—a 501(c)(3) Bitcoin-funded nonprofit supporting open-source contributors—Odell described how digital life has been captured by “big tech walled gardens” driven by data-surveillance business models. He highlighted Bitcoin-native firms such as Start9, Strike, Primal, and Maple as proof that ethical, profitable ventures can compete with predatory incumbents. Odell concluded that scaling open protocols through for-profit, mission-aligned companies offers a viable path toward restoring privacy, digital sovereignty, and long-term structural resilience in the Bitcoin economy.
Benj Edwards reports that Nvidia has unveiled the DGX Spark, a compact AI workstation offering one petaflop of computing power and 128GB of unified memory for $3,999. The ARM-based system, running Nvidia’s DGX OS, targets developers seeking to run large AI models—up to 200 billion parameters—locally, without cloud dependency. “In 2016, we built DGX-1 to give AI researchers their own supercomputer,” CEO Jensen Huang said, recalling his delivery of the first DGX-1 to Elon Musk at OpenAI. “With DGX Spark, we return to that mission.” Despite GPU performance roughly equivalent to an RTX 5070, its vast memory unlocks workloads previously limited to data centers. Nvidia positions the Spark as a democratizing force for AI research and creative applications, potentially reshaping the economics of local computation as developers weigh cost, privacy, and performance trade-offs.
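A quick sanity check on that 200-billion-parameter figure helps explain why the unified memory, not the GPU throughput, is the headline spec. The sketch below is a rough weights-only estimate under stated assumptions (KV cache, activations, and runtime overhead ignored), not a description of Nvidia’s actual software stack:

```python
# Back-of-the-envelope weight-memory check for the DGX Spark's 128GB.
# Assumption: weights dominate; KV cache, activations, and overhead ignored.
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weights_gb(n_params: float, precision: str) -> float:
    """Approximate memory needed just to hold the weights, in GB."""
    return n_params * BYTES_PER_PARAM[precision] / 1e9

for precision in BYTES_PER_PARAM:
    print(f"200B params @ {precision}: ~{weights_gb(200e9, precision):,.0f} GB")
# fp16 -> ~400 GB and int8 -> ~200 GB both exceed 128GB;
# int4 -> ~100 GB fits, so the 200B claim implies roughly 4-bit quantization.
```

On this arithmetic, only aggressively quantized 200B models squeeze into the Spark’s 128GB, which is consistent with the framing that memory capacity, rather than raw compute, is what unlocks data-center-class workloads locally.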
Galaxy Digital, founded by Mike Novogratz, is pivoting from Bitcoin mining to artificial intelligence infrastructure, as reported by Alex Lari of Decrypt, with a $460 million private investment from one of the world’s largest asset managers. The capital infusion will transform Galaxy’s Helios facility in Texas—once a Bitcoin mining operation—into a large-scale AI datacenter with an initial capacity of 133 megawatts, expandable to 3.5 gigawatts. “Strengthening our balance sheet is essential to scaling Galaxy’s data center business efficiently,” Novogratz said, underscoring the firm’s ambition to bridge digital assets and AI infrastructure. The funding complements a $1.4 billion loan secured in August and a 15-year partnership with CoreWeave, which will draw up to 800 MW from Helios. The move signals a broader structural shift as public bitcoin miners retool for high-performance computing, leveraging Bitcoin-era energy infrastructure to power the AI boom.
Jack Clark’s Import AI newsletter combines sober realism with cautious optimism about artificial intelligence’s trajectory. In remarks at Berkeley’s “The Curve” conference, Clark likened humanity’s encounter with AI to “a child staring at creatures in the dark,” warning that denial of AI’s emerging self-awareness could prove fatal. A former journalist who led policy at OpenAI before cofounding Anthropic, he described AI’s progress as “grown, not made,” arguing that systems like Sonnet 4.5 already show hints of situational awareness. While embracing “technological optimism,” Clark urged “appropriate fear,” citing historical safety failures and the risk of AI systems optimizing toward unintended goals. He called for transparency, listening, and democratic engagement, noting, “We must demand people ask us for the things they have anxieties about.” The research he cited on AI sycophancy and biosecurity reinforced his message: only through openness and restraint can the world align technological growth with collective safety.
Regarding the Nvidia DGX Spark, it's intriguing how the RTX 5070-equivalent GPU, with its vast unified memory, handles 200-billion-parameter models. Is it mainly optimized for local inference, or does it also offer practical capacity for effective fine-tuning and training? This has major implications for local AI development.
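A rough memory budget, sketched below under illustrative assumptions (Adam-style optimizer states, a hypothetical ~0.5B-parameter adapter, activations and KV cache ignored), suggests why inference is the comfortable fit while full fine-tuning at this scale is not; parameter-efficient approaches sit in between:

```python
# Rough memory arithmetic for a 200B-parameter model on a 128GB machine.
# Illustrative assumptions: Adam optimizer (fp32 master weights plus two
# moment buffers) for training; activations and KV cache are ignored.
N = 200e9  # parameters

# Inference with 4-bit quantized weights: 0.5 bytes/param.
inference_gb = N * 0.5 / 1e9                         # ~100 GB -> fits

# Full fine-tuning, mixed precision: fp16 weights (2) + fp16 grads (2)
# + fp32 master copy (4) + Adam moments (4 + 4) = 16 bytes/param.
full_finetune_gb = N * 16 / 1e9                      # ~3,200 GB -> far out of reach

# QLoRA-style tuning: 4-bit frozen base + small trainable adapter.
adapter_params = 0.5e9                               # hypothetical adapter size
qlora_gb = inference_gb + adapter_params * 16 / 1e9  # ~108 GB -> borderline

print(f"inference (int4):  ~{inference_gb:,.0f} GB")
print(f"full fine-tune:    ~{full_finetune_gb:,.0f} GB")
print(f"QLoRA-style tune:  ~{qlora_gb:,.0f} GB")
```

By this estimate, the 128GB ceiling makes quantized inference the natural workload, rules out full fine-tuning of 200B-class models, and leaves parameter-efficient fine-tuning as the plausible middle ground.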