Big breakthrough: A few months ago, my lab at MIT introduced SPARKS, our autonomous scientific discovery model. Since then we have demonstrated its applicability to broad problem spaces across domains, from proteins and bio-inspired materials to inorganic materials. SPARKS learns by doing, thinks by critiquing itself & creates knowledge through recursive interaction; not just with data, but with the physical & logical consequences of its own ideas. It closes the entire scientific loop - hypothesis generation, data retrieval, coding, simulation, critique, refinement, & detailed manuscript drafting - without prompts, manual tuning, or human oversight. SPARKS is fundamentally different from frontier models. While models like o3-pro and o3 deep research can produce summaries, they stop short of full discovery. SPARKS conducts the entire scientific process autonomously, generating & validating falsifiable hypotheses, interpreting results & refining its approach until a reproducible, fully validated, evidence-based discovery emerges. This is the first time we've seen AI discover new science. SPARKS is orders of magnitude more capable than frontier models & even when comparing just the writing, SPARKS still outperforms: in our benchmark evaluation, it scored 1.6× higher than o3-pro and over 2.5× higher than o3 deep research - not because it writes more, but because it writes with purpose, grounded in original, validated compositional reasoning from start to finish. We benchmarked SPARKS on several case studies, where it uncovered two previously unknown protein design rules: 1⃣ Length-dependent mechanical crossover: β-sheet-rich peptides outperform α-helices - but only once chains exceed ~80 amino acids. Below that, helices dominate. No prior systematic study had exposed this crossover, leaving protein designers without a quantitative rule for sizing sheet-rich materials.
This discovery resolves a long-standing ambiguity in molecular design and provides a principle to guide the structural tuning of biomaterials and protein-based nanodevices based on mechanical strength. 2⃣ A stability “frustration zone”: At intermediate lengths (~50–70 residues) with balanced α/β content, peptide stability becomes highly variable. SPARKS mapped this volatile region and explained its cause: competing folding nuclei and exposed edge strands that destabilize structure. This insight pinpoints a failure regime in protein design where instability arises not from randomness, but from well-defined physical constraints, giving designers new levers to avoid brittle configurations or engineer around them. This gives engineers and biologists a roadmap for avoiding stability traps in de novo design - especially when exploring hybrid motifs. Stay tuned for more updates, examples, papers and details.
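As an illustrative sketch (not SPARKS itself), the two design rules can be encoded as a simple motif-selection heuristic. Only the thresholds (~80-residue crossover, ~50–70 residue frustration zone with balanced α/β content) come from the case studies above; the function name and return strings are assumptions for illustration:

```python
def suggest_motif(chain_length: int, alpha_beta_balanced: bool = False) -> str:
    """Toy heuristic encoding the two protein design rules described above.

    Rule 1: beta-sheet-rich peptides only outperform alpha-helices
            mechanically once chains exceed ~80 amino acids.
    Rule 2: at intermediate lengths (~50-70 residues) with balanced
            alpha/beta content, stability is highly variable.
    """
    if 50 <= chain_length <= 70 and alpha_beta_balanced:
        return "frustration zone: expect variable stability with hybrid motifs"
    if chain_length > 80:
        return "beta-sheet-rich: superior mechanical strength at this length"
    return "alpha-helical: dominates mechanically below the ~80-residue crossover"

print(suggest_motif(120))                            # long chain
print(suggest_motif(40))                             # short chain
print(suggest_motif(60, alpha_beta_balanced=True))   # hybrid, intermediate length
```
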
-
We’re planting trees — but losing biodiversity. Global efforts to restore forests are gathering pace, driven by promises of combating climate change, conserving biodiversity, and improving livelihoods. Yet a recent paper published in Nature Reviews Biodiversity warns that the biodiversity gains from these initiatives are often overstated — and sometimes absent altogether. Forest restoration is at the heart of Target 2 of the Kunming-Montreal Global Biodiversity Framework, which aims to place 30% of degraded ecosystems under effective restoration by 2030. But the gap between ambition and outcome is wide. "Biodiversity will remain a vague buzzword rather than an actual outcome" unless projects explicitly prioritize it, the authors caution. Restoration has typically prioritized utilitarian goals such as timber production, carbon sequestration, or erosion control. This bias is reflected in the widespread use of monoculture plantations or low-diversity agroforests. Nearly half of the Bonn Challenge’s forest commitments consist of commercial plantations of exotic species — a trend that risks undermining biodiversity rather than enhancing it. Scientific evidence shows that restoring biodiversity requires more than planting trees. Methods like natural regeneration — allowing forests to recover on their own — can often yield superior biodiversity outcomes, though they face social and economic barriers. By contrast, planting a few fast-growing species may sequester carbon quickly but offers little for threatened plants and animals. Biodiversity recovery is influenced by many factors: the intensity of prior land use, the surrounding landscape, and the species chosen for restoration. Recovery is slow — often measured in decades — and tends to lag for rare and specialist species. Alarmingly, most projects stop monitoring after just a few years, long before ecosystems stabilize. However, the authors say there are reasons for optimism. 
Biodiversity markets, including emerging biodiversity credit schemes and carbon credits with biodiversity safeguards, could mobilize new financing. Meanwhile, technologies like environmental DNA sampling, bioacoustics, and remote sensing promise to improve monitoring at scale. To turn good intentions into reality, the paper argues, projects must define explicit biodiversity goals, select suitable methods, and commit to long-term monitoring. Social equity must also be central. "Improving biodiversity outcomes of forest restoration… could contribute to mitigating power asymmetries and inequalities," the authors write, citing examples from Madagascar and Brazil. If designed well, forest restoration could help address the twin crises of biodiversity loss and climate change. But without a deliberate shift, billions of dollars risk being spent on projects that plant trees — and little else. 🔬 Brancalion et al (2025): https://lnkd.in/gG6X36WP
-
If your paper is getting rejected, it isn’t necessarily the science that’s the problem (it’s likely the journal fit that’s off!). Here’s how you can be strategic about journal selection. How do I choose the right scientific journal? ↳ Analyze your citation list and target relevant publications. Can impact factor really determine journal quality? ↳ Look beyond numbers, focus on specialized audience fit. How to avoid predatory journal publication traps? ↳ Verify journal reputation before submitting your research. Will editors help improve my manuscript? ↳ Follow author guidelines meticulously. Navigating the academic publication landscape can feel like traversing a complex maze. As a professor, I've learned that selecting the right journal is both an art and a science. Here's a game-changing approach I've developed: 1. Conduct a citation audit: Count the journals you've referenced most frequently. These are likely your ideal publication targets. 2. Beyond impact factor: Don't get fixated on numbers. A lower-ranked journal with a specialized audience might be more valuable than a high-impact generic publication. 3. Beware of predatory journals: If an unsolicited email promises quick publication for a fee, run! Legitimate open-access journals conduct rigorous peer review. 4. Craft a strategic cover letter: Suggest credible reviewers, highlight your paper's novelty, and demonstrate professionalism. 5. Patience is key: Most journals reject approximately 50% of submissions. Don't be discouraged - each submission is a learning opportunity. Pro tip: Always read and follow the journal's specific author guidelines. This shows you're a detail-oriented, professional researcher. Have you ever struggled with selecting the right scientific journal for your research? What challenges have you encountered? #science #scientist #ScientificCommunication #publishing #phd #professor #research #postgraduate
-
🚀 Now publicly available 🚀 The Data Innovation Toolkit! And Repository! (✍️ coauthored with Maria Claudia Bodino, Nathan da Silva Carvalho, Marcelo Cogo, and Arianna Dafne Fini Storchi, and commissioned by the Digital Innovation Lab (iLab) of DG DIGIT at the European Commission) 👉 Despite the growing awareness about the value of data to address societal issues, the excitement around AI, and the potential for transformative insights, many organizations struggle to translate data into actionable strategies and meaningful innovations. 🔹 How can those working in the public interest better leverage data for the public good? 🔹 What practical resources can help navigate data innovation challenges? To bridge these gaps, we developed a practical and easy-to-use toolkit designed to support decision makers and public leaders managing data-driven initiatives. 🛠️ What’s inside the first version of the Data Innovation Toolkit (105 pages)? 👉 A repository of educational materials and best practices from the public sector, academia, NGOs, and think tanks. 👉 Practical resources to enhance data innovation efforts, including: ✅ Checklists to ensure key aspects of data initiatives are properly assessed. ✅ Interactive exercises to engage teams and build essential data skills. ✅ Canvas models for structured planning and brainstorming. ✅ Workshop templates to facilitate collaboration, ideation, and problem-solving. 🔍 How was the toolkit developed? 📚 Repository: Curated literature review and a user-friendly interface for easy access. 🎤 Interviews & Workshops: Direct engagement with public sector professionals to refine relevance. 🚀 Minimum Viable Product (MVP): Iterative development of an initial set of tools. 🧪 Usability Tests & Pilots: Ensuring functionality and user-friendliness. This is just the beginning! We’re excited to continue refining and expanding this toolkit to support data innovation across public administrations. 
🔗 Check it out and let us know your thoughts: 💻 Data Innovation Toolkit: https://lnkd.in/e68kqmZn 💻 Data Innovation Repository: https://lnkd.in/eU-vZqdC #DataInnovation #PublicSector #DigitalTransformation #OpenData #AIforGood #GovTech #DataForPublicGood
-
My colleague Prof. Eleanor Maguire passed away this weekend after a long battle with cancer. Her contributions to #neuroscience have shaped how we understand memory and navigation, leaving a lasting legacy. One of Eleanor’s groundbreaking discoveries was that when a London taxi driver learns the 25,000 winding streets of London together with thousands of landmarks (collectively called “the Knowledge”), it physically changes their #brain. A part of the brain called the #hippocampus is important both for making new memories and for navigating one’s environment. For aspiring black cab drivers, learning the Knowledge pushes the hippocampus to adapt in remarkable ways. Eleanor and her colleagues used #MRI to measure the hippocampus in taxi drivers compared to a control group and discovered it was larger in the taxi drivers. In other words, London cabbies have special brains that are particularly well suited for their work. This raises a really interesting question: Are they born with a larger hippocampus and therefore better able to become taxi drivers, or does learning the Knowledge change their brains? To answer this, Eleanor and her team ran a follow-up study where they followed 39 trainee taxi drivers from the beginning of their training to when they qualified approximately 4 years later. Each received a brain scan at the beginning and end of their training. 👉 Before training, the aspiring taxi drivers showed no difference in hippocampus size compared to matched control volunteers. 👉 After training, the newly qualified taxi drivers were found to have larger hippocampi than they did 4 years earlier, and also larger than the control volunteers. In other words, even as an adult, learning the Knowledge has a strong effect on the brain that can be measured using MRI. Eleanor’s work has become one of the most well-known examples of #neuroplasticity, which is the brain’s remarkable ability to change and adapt throughout life. 
A few years ago, a group of students were visiting UCL’s Functional Imaging Lab. They had learned about her taxi study in their A-level psychology class so when they discovered that Eleanor worked there, there was a frenzy of excitement! They couldn’t believe that they got to meet the “Maguire” whose work they had read in school. It was absolutely charming! Although best known for work with taxi drivers, Eleanor made substantial contributions to memory and hippocampal function including: 👉 Discovering that patients with amnesia cannot imagine the future 👉 Showing that it is possible to decode individual memories by analysing patterns of activity in the hippocampus 👉 Clarifying the relation between memory for life episodes, the ability to imagine the future, and the ability to navigate spatial environments Eleanor’s work is a powerful reminder of the brain’s potential to adapt and grow throughout life. May her legacy inspire all of us to keep learning and exploring the frontiers of science.
-
🌍 We Can’t Afford to Get Climate Policy Wrong—A Look at the Data Behind What Really Works 🌍 In the race against time to combat climate change, bold promises are everywhere. But here’s the critical question: Are the policies being implemented actually reducing emissions at the scale we need? A groundbreaking study published in Science cuts through the noise and delivers the insights we desperately need. Evaluating 1,500 climate policies from around the world, the research identifies the 63 most effective ones—policies that have delivered tangible, significant reductions in emissions. What’s striking is that the most successful strategies often involve combinations of policies, rather than single initiatives. Think of it as the ultimate teamwork: when policies like carbon pricing, renewable energy mandates, and efficiency standards are combined thoughtfully, the impact is far greater than any one policy could achieve on its own. It’s a powerful reminder that for climate solutions, the whole is indeed greater than the sum of its parts. Moreover, the study’s use of counterfactual emissions pathways is a game changer. By showing what would have happened without these policies, it provides a clear, quantifiable measure of their effectiveness. This is exactly the kind of rigorous evaluation we need to ensure that every policy counts, especially when we’re working against the clock. If we’re serious about meeting the Paris Agreement’s targets, we need to focus on what works—and this research offers a clear roadmap. Let’s champion policies that have proven to make a difference, because we don’t have time to waste on anything less. 🔗 Full study in the comments #ClimateAction #Sustainability #PolicyEffectiveness #ParisAgreement #NetZero #ClimateScience
-
Text understanding with #LLMs is useful but not enough for scientific understanding and discovery. In chemistry, in addition to text, chemical structure is essential to determine the properties of molecules. We have created the first multimodal text-chemical structure model: MoleculeSTM. It has an aligned latent space of both modalities. This allows the users to provide free-form text instructions to create molecules with arbitrary sets of properties. This enables zero-shot text-guided molecule editing (lead optimization) without the need to fine-tune the model for each new specification. Paper: bit.ly/4736BPH Code: bit.ly/4877YOS The core idea of MoleculeSTM is to align the chemical structure and textual description modalities using contrastive pretraining. The pivotal advantage of such alignment is its capacity to introduce a new paradigm of LLM for drug discovery: by fully utilizing the open vocabulary and compositionality attributes of natural language. To adapt it to a more concrete task, we focus on zero-shot text-guided molecule editing (aka lead optimization). Existing ML-based molecule editing methods suffer from data insufficiency issues. MoleculeSTM circumvents this by formulating molecule editing as a natural language understanding and interpolation problem, which is much easier to solve under the zero-shot setting. Such a novel paradigm is meaningful for addressing more practical drug discovery challenges. We will have more follow-up works along this LLM for the molecule/drug discovery research line. Please stay tuned! Shengchao Liu Chaowei Xiao Weili Nie Zhuoran Qiao Caltech
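The contrastive pretraining idea can be sketched in a few lines: given a batch of paired molecule and text embeddings, a symmetric InfoNCE-style loss pulls matched pairs together in the shared latent space and pushes mismatched pairs apart. This is a generic CLIP-style sketch under assumed embedding shapes and temperature, not the MoleculeSTM implementation:

```python
import numpy as np

def info_nce(mol_emb: np.ndarray, txt_emb: np.ndarray,
             temperature: float = 0.07) -> float:
    """Symmetric contrastive loss over a batch of paired embeddings.

    mol_emb, txt_emb: (batch, dim) arrays; row i of each is a matched pair.
    """
    # L2-normalize so dot products are cosine similarities
    mol = mol_emb / np.linalg.norm(mol_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = mol @ txt.T / temperature       # (batch, batch) similarity matrix
    labels = np.arange(len(logits))          # matched pairs lie on the diagonal

    def xent(lg: np.ndarray) -> float:
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()      # cross-entropy at the diagonal

    # average the molecule->text and text->molecule directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss is what produces the aligned latent space: after training, a free-form text instruction can be embedded and used to steer molecule generation or editing without task-specific fine-tuning.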
-
Big milestone for industrial decarbonisation: BASF has started building one of the world’s largest industrial heat pumps at its Ludwigshafen site to produce low‑carbon steam for formic acid production. I told Chemistry World: “BASF is the largest chemical company in the world. [Putting its] weight behind heat pumps at this scale signals confidence in the technology in the chemicals sector and beyond.” With 50 MW thermal output and up to 500,000 tonnes of steam per year, BASF expects up to 98% emissions reductions for the steam it supplies—around 100,000 tonnes CO2 annually. The system will recover waste heat from a steam cracker and run on renewable electricity, backed by Germany’s Carbon Contracts for Difference. Article by Khushleen Kaur here https://lnkd.in/gZcv7E3n
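The headline figures are roughly self-consistent, which a back-of-envelope check makes explicit. Only the three stated numbers come from the article; the implied baseline and per-tonne factor are derived here:

```python
steam_tonnes_per_year = 500_000   # stated steam capacity
avoided_co2_tonnes = 100_000      # stated annual CO2 savings
reduction_fraction = 0.98         # stated emissions reduction (up to 98%)

# If 98% of baseline emissions are avoided, the baseline is:
baseline_co2 = avoided_co2_tonnes / reduction_fraction
# ...which implies this emission factor for the displaced steam supply:
implied_factor = baseline_co2 / steam_tonnes_per_year  # t CO2 per t steam

print(f"implied baseline: {baseline_co2:,.0f} t CO2/yr")
print(f"implied factor:   {implied_factor:.3f} t CO2 per t steam")
```

The implied ~0.2 t CO2 per tonne of steam is in the typical range for gas-fired steam generation, so the three stated numbers hang together.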
-
Had to share the one prompt that has transformed how I approach AI research. 📌 Save this post. Don’t just ask for point-in-time data like a junior PM. Instead, build in more temporal context through systematic data collection over time. Use this prompt to become a superforecaster with the help of AI. Great for product ideation, competitive research, finance, investing, etc. ⏰⏰⏰⏰⏰⏰⏰⏰⏰⏰⏰⏰ TIME MACHINE PROMPT: Execute longitudinal analysis on [TOPIC]. First, establish baseline parameters: define the standard refresh interval for this domain based on market dynamics (enterprise adoption cycles, regulatory changes, technology maturity curves). For example, AI refresh cycle may be two weeks, clothing may be 3 months, construction may be 2 years. Calculate n=3 data points spanning 2 full cycles. For each time period, collect: (1) quantitative metrics (adoption rates, market share, pricing models), (2) qualitative factors (user sentiment, competitive positioning, external catalysts), (3) ecosystem dependencies (infrastructure requirements, complementary products, capital climate, regulatory environment). Structure output as: Current State Analysis → T-1 Comparative Analysis → T-2 Historical Baseline → Delta Analysis with statistical significance → Trajectory Modeling with confidence intervals across each prediction. Include data sources. ⏰⏰⏰⏰⏰⏰⏰⏰⏰⏰⏰⏰
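The temporal sampling the prompt asks for (n=3 data points spanning 2 full refresh cycles) can be sketched directly; the 14-day AI refresh interval is the example given in the prompt, and the function name and fixed anchor date are illustrative assumptions:

```python
from datetime import date, timedelta

def sample_points(today: date, refresh_days: int, n: int = 3) -> list[date]:
    """Return n sample dates spanning (n-1) full refresh cycles, newest first.

    For an AI topic with a two-week refresh cycle, n=3 yields
    today (T-0), two weeks ago (T-1), and four weeks ago (T-2).
    """
    return [today - timedelta(days=refresh_days * k) for k in range(n)]

# AI example: ~2-week refresh cycle -> 3 points spanning 2 cycles
print(sample_points(date(2025, 1, 15), refresh_days=14))
```

Collecting the prompt's quantitative, qualitative, and ecosystem factors at each of these dates gives the T-0/T-1/T-2 structure the output section requires.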
-
🚀 Meet RAVEN: The Flying Robot That Walks, Jumps, and Soars 🦅 Drones are clumsy. They need open space, stable launch points, and struggle with rough terrain. Birds, on the other hand, dominate both air and land. That’s exactly what researchers at EPFL’s Laboratory of Intelligent Systems have captured in RAVEN—a robotic bird that walks, hops, jumps, and flies. 🔥 Inspired by ravens and crows, RAVEN’s multifunctional legs allow it to take off without a runway, land on rough surfaces, and even traverse obstacles that ground-based robots can’t handle. Traditional flying robots had to choose: either walk or fly—RAVEN does both. ✨ Why this matters: 🔹 Built for agility – It can jump-start its flight, making takeoff more energy-efficient. ⚡ 🔹 Nature’s blueprint, optimized – Lightweight avian-inspired legs mimic tendons and muscles. 🦵 🔹 Real-world impact – Imagine drones that can land in disaster zones, navigate tight spaces, or deliver aid without human intervention. 🎯 The future of robotics isn’t about copying nature—it’s about surpassing it. RAVEN isn’t just a flying robot. It’s a glimpse of what’s next: machines that move seamlessly across worlds, just like nature intended. 🌍✨ 🤔 What other real-world challenges do you think robots like RAVEN could help solve? Drop your thoughts below! ⬇️ #AI #Robotics #FlyingRobots #Drones #Innovation #FutureTech #Biomimicry #Aerospace #TechForGood