Have you ever wondered why organizations go the extra mile to build data warehouses instead of relying on their operational databases for analytics? The answer lies in the unique capabilities of data warehouses. Unlike operational systems, which are optimized for transactions, data warehouses are purpose-built for decision support. They consolidate data from multiple sources, integrate it into a unified structure, and provide a historical perspective that enables in-depth analysis and strategic decision-making. Data warehouses empower businesses with online analytical processing (OLAP), supporting complex queries, multidimensional views, and advanced data mining functions like association, classification, and prediction. Whether it’s identifying trends, forecasting outcomes, or enhancing operational efficiency, data warehouses form the backbone of effective analytics strategies. If you’re curious to explore how data warehouses work and why they’re indispensable for modern enterprises, this comprehensive guide is for you. Join our 137,000+ newsletter subscribers: https://lnkd.in/dxtrCMRF
Science-Based Decision Making
Explore top LinkedIn content from expert professionals.
-
-
This paper discusses the integration of human values into LLMs used in medical decision-making, highlighting the complexities and ethical considerations involved. 1️⃣ Human values, reflecting individual and societal principles, inherently influence AI models from their data selection to their deployment. 2️⃣ The incorporation of values in medical decision-making dates back to the 1950s, emphasizing the ongoing relevance of balancing probability and utility. 3️⃣ Training data can both reveal and amplify societal biases, affecting AI outputs in medical applications like radiography and dermatology. 4️⃣ A clinical example involving growth hormone treatment illustrates how values influence AI recommendations and decision-making among patients, doctors, and insurers. 5️⃣ LLMs can be tuned to reflect specific human values through supervised fine-tuning and reinforcement learning from human feedback, but this raises questions about whose values are represented. 6️⃣ Addressing value discrepancies, ensuring continuous retraining, and using population-level utility scores are vital for aligning AI with evolving human values. ✍🏻 Kun-Hsing Yu, Elizabeth Healey, Tze-Yun Leong, Isaac Kohane, Arjun Manrai. New England Journal of Medicine. May 30, 2024. DOI: 10.1056/NEJMra2214183
-
Picking the right experiment for your hypothesis can be difficult if you lack an overview of your options. I've got you covered 😉 I love - LOVE - the tools that Strategyzer provides with their books. In the book "Testing Business Ideas", David Bland and Alexander Osterwalder provide a wide range of experiments. • They are sorted into "Discovery Experiments" and "Validation Experiments", i.e. "Learn & Confirm". • Each experiment has a 1-5 grading of cost, setup time, run time and evidence strength. • Each experiment is categorised by whether it tests desirability, feasibility or viability. 𝗔𝗿𝗲 𝘆𝗼𝘂 𝗮 𝘃𝗶𝘀𝘂𝗮𝗹 𝗽𝗲𝗿𝘀𝗼𝗻? I am! When I teach or train product managers on experimentation and evidence gathering for a hypothesis-based product management approach, or for increasing the confidence level in prioritisation, depending on the case I use one of these 𝘁𝗵𝗿𝗲𝗲 𝘃𝗶𝘀𝘂𝗮𝗹𝘀: 💠 Image 1: Discovery experiments. Content source: "Testing Business Ideas" by Bland & Osterwalder; visualisation by me 💠 Image 2: Validation experiments. Content source: "Testing Business Ideas" by Bland & Osterwalder; visualisation by me 💠 Image 3: General overview of different types of experiments. Content source: Too many 😅; visualisation by me 𝗘𝘅𝗽𝗲𝗿𝗶𝗺𝗲𝗻𝘁 𝗰𝗵𝗮𝗶𝗻𝘀: Test one hypothesis with multiple experiments if the hypothesis is risky. Start with quick and cheap discovery and validation experiments, then run increasingly precise ones with a higher level of evidence strength. For less risky hypotheses, check what else you need to understand to judge whether it's a good idea, and only run experiments to learn those points. For low-risk hypotheses: Just do it and analyse 😉 Have you run experiment chains? Or only experimented at a low level and were surprised that your idea failed? 😜 Share your learnings, I'm curious to know!
-
Many strategic plans appear sophisticated on the surface. The decks are polished, the goals are clear, and the timelines feel actionable. But too often, these plans are built on lagging data and internal assumptions rather than real-time insight. The result is a document that feels strategic, but in practice, offers very little clarity for decision-making in the moment. The real issue is not the absence of planning. The issue is building plans in isolation from the present. When executives are making decisions based on data that is a week or a quarter old, they are not operating with the full picture. Planning without a live connection to the business reality leads to misaligned budgets, missed forecasts, and confusion across departments. It becomes incredibly difficult to adapt when there is no timely feedback guiding the next move. If the information driving your strategy is always behind the curve, your decisions will be as well. The organizations that lead effectively today are not necessarily the ones with the most experience or the biggest budgets. They are the ones with the clearest view of what is actually happening in their business, right now. And that clarity enables faster action, better alignment, and more resilient planning. #StrategicPlanning #DecisionMaking #BusinessIntelligence #ExecutiveLeadership #RealTimeData #PlanningCulture #OperationsStrategy
-
A must-read study in JAMA Network Open just compared a traditional diagnostic decision support system (DDSS), DXplain, with two large language models, ChatGPT-4 (LLM1) and Gemini 1.5 (LLM2), using 36 unpublished complex clinical cases. Key Findings: - When lab data was excluded, the DDSS outperformed both LLMs: 56% vs. 42% (LLM1) and 39% (LLM2) in listing the correct diagnosis. - When lab data was included, performance improved for all: DDSS (72%), LLM1 (64%), LLM2 (58%). - Importantly, each system captured diagnoses that the others missed, indicating potential synergy between expert systems and LLMs. While the DDSS still leads, the rapid improvement in #LLMs cannot be ignored. The study presents a compelling case for hybrid approaches: combining deterministic rule-based systems with the linguistic and contextual fluency of LLMs, while also incorporating structured data with standardized coding such as LOINC codes and SNOMED CT. The inclusion of structured data significantly enhanced diagnostic accuracy across the board. This validates the notion that structured and unstructured data must collaborate, not compete, to deliver better #CDS outcomes. #HealthcareonLinkedin #Datascience #ClinicalInformatics #HealthIT #AI #GenAI #ClinicalDecisionSupport
-
Mathematicians Solve 125-Year-Old Problem, Uniting Key Laws of Physics For over a century, physicists and mathematicians have sought a single mathematical framework to describe the motion of fluids and the behavior of individual particles within them. Now, a team led by Zaher Hani at the University of Michigan has solved this problem, resolving Hilbert’s sixth problem, one of the famous challenges David Hilbert posed in 1900, and providing deeper insight into the physics of the atmosphere, oceans, and plasma flows. The Challenge: Unifying Three Scales of Motion • Microscopic Scale: Individual particles follow Newton’s laws, colliding and interacting randomly. • Mesoscopic Scale: As particle interactions grow, their behavior follows statistical mechanics, the foundation of kinetic gas theory. • Macroscopic Scale: When particle numbers become immense, they obey fluid dynamics, governed by the famous Navier-Stokes equations. The difficulty has been stitching these descriptions together: ensuring that the transition from particles to gases to fluids happens in a mathematically consistent way. The Breakthrough Hani and his team found a way to derive fluid equations from particle behavior by carefully analyzing the interplay between microscopic chaos and large-scale stability. Their framework formally links: • Newton’s Laws → Boltzmann Equation (kinetic theory) → Fluid Dynamics (Navier-Stokes) • This provides mathematical rigor to a process physicists long suspected but couldn’t prove. Why This Matters • Better Climate Models: This solution enhances our ability to simulate the atmosphere and ocean currents, leading to more accurate weather forecasts and climate predictions. • Plasma & Fusion Research: Understanding how charged particles behave at different scales is key to nuclear fusion and space physics. • Bridging Physics & Math: This result unifies major physical laws, bringing us closer to a universal mathematical framework for fluid motion.
Physicists previously believed this problem was beyond reach, making this achievement a landmark in mathematical physics—one that could reshape our understanding of fluids in nature and technology.
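For readers who want to see the objects being connected, here are the standard textbook forms of the three levels (schematic notation, not the paper's own):

```latex
% Microscopic: Newton's second law for N interacting particles
m\,\ddot{x}_i = \sum_{j \neq i} F(x_i - x_j), \qquad i = 1, \dots, N

% Mesoscopic: the Boltzmann equation for the phase-space density f(x, v, t)
\partial_t f + v \cdot \nabla_x f = Q(f, f)

% Macroscopic: the incompressible Navier-Stokes equations for the velocity u
\partial_t u + (u \cdot \nabla) u = -\nabla p + \nu \Delta u, \qquad \nabla \cdot u = 0
```

The result rigorously justifies the passages between these levels: from the first system to the second as the particle number N grows (the Boltzmann-Grad limit), and from the second to the third (the hydrodynamic limit).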
-
Many CRO/experimentation programmes are an exercise in cycling through random ideas and best practices to see if any move a metric. There is nothing wrong with this per se, and in fact, some very interesting things can come from such accidental finds. But are you really learning anything, or growing and innovating your business strategically? Think about the Apollo moon missions, arguably the greatest of all human achievements. The programme succeeded not because of individual experiments on rocket propulsion or anything else, but because of the genius of its systems management and coordinated learning and development. Imagine if they had just told everyone to test a load of ideas and see what happened. True experimentation begins with strategy and theory, and seeks to use experimentation to develop those theories and make bigger strategic decisions and pivots. The example shown is a real example for a retail brand. They wanted to appeal to a younger demographic and did not know how this could be achieved. The chart demonstrates the process of critical thinking that breaks down this challenge, first into strategic hypotheses (that this would be achieved either through changes to product OR to media and targeting) and then into more testable functional hypotheses and then experiments. These experiments are not necessarily A/B tests and run across the whole business. Experimentation is a strategic methodology for innovation and growth, but it requires careful centralised management, coordination and operating systems. Many businesses think they 'tick the box' of experimentation because they have an A/B testing platform and someone in the corner of a marketing team with a login for it. This is not experimentation, and you will not learn or grow with this approach. #experimentation #cro #productmanagement #growth #digitalexperience #experimentationledgrowth #elg #growthexperimentation
-
Decision-making is a necessity in almost every aspect of daily life. However, making sound decisions becomes particularly challenging when the stakes are high and numerous complex factors need to be considered. In this blog post, The New York Times (NYT) team shares insights on leveraging the Analytic Hierarchy Process (AHP) to enhance decision-making. At its core, AHP is a decision-making tool that simplifies complex problems by breaking them down into smaller, more manageable components. For instance, the team faced the task of selecting a privacy-friendly canonical ID to represent users. Let's delve into how AHP was applied in this scenario: -- The initial step involves decomposing the decision problem into a hierarchy of more easily comprehensible sub-problems, each of which can be independently analyzed. The team identified criteria impacting the choice of the canonical ID, such as Database Support and Developer User Experience. Each alternative canonical ID choice was assessed based on its performance against these criteria. -- Once the hierarchy is established, decision-makers evaluate its various elements by comparing them pairwise. For instance, the team found a consensus that "Developer UX is moderately more important than database support." AHP translates these evaluations into numerical values, enabling comprehensive processing and comparison across the entire problem domain. -- In the final phase, numerical priorities are computed for each decision alternative, representing their relative ability to achieve the decision goal. This allows for a straightforward assessment of the available courses of action. The team found AHP highly successful: the process provided an opportunity to meticulously examine criteria and options and to gain deeper insights into the features and trade-offs of each option. This framework can serve as a valuable toolkit for those facing similar decision-making challenges.
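To make the final phase concrete, here is a minimal sketch of how pairwise judgments become numerical priorities. The 2x2 matrix and the Saaty-scale value 3 for "moderately more important" are illustrative assumptions, not numbers taken from the NYT post:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for two criteria mentioned in
# the post: Developer UX vs. Database Support. On Saaty's 1-9 scale,
# "moderately more important" is conventionally a 3, so
# A[UX, DB] = 3 and the reciprocal A[DB, UX] = 1/3.
criteria = ["Developer UX", "Database Support"]
A = np.array([
    [1.0, 3.0],
    [1 / 3, 1.0],
])

# Geometric-mean approximation of the principal eigenvector:
# geometric mean of each row, normalized so the weights sum to 1.
w = np.prod(A, axis=1) ** (1 / A.shape[0])
priorities = w / w.sum()

for name, p in zip(criteria, priorities):
    print(f"{name}: {p:.3f}")
# prints Developer UX: 0.750 and Database Support: 0.250
```

The same weighting is then applied one level down, scoring each canonical ID alternative against each criterion and combining the results into a single priority per alternative.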
#analytics #datascience #algorithm #insight #decisionmaking #ahp – – – Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts: -- Apple Podcast: https://lnkd.in/gj6aPBBY -- Spotify: https://lnkd.in/gKgaMvbh https://lnkd.in/gzaZjYi7
-
Paper sharing: AI in Science Discovery & Product Innovation MIT researchers examined what happens when AI-driven discovery is applied to materials science, where it is making discoveries happen faster than ever! What they did was introduce an AI tool (similar to the “AI Scientist” from Sakana AI) for materials discovery to 1,800 scientists in the R&D lab of a large U.S. firm. Traditionally, scientific discovery is labor-intensive and manual: a process full of trial and error, where scientists conceptualize various potential structures and then test their properties. Here’s how AI tackles it: AI generates ideas, prioritizes promising materials, tests them, and iterates on any false positives, refining until it finds viable options. Once validated, these materials can be patented and commercialized. This entire process runs much faster, and the impact is striking. Researchers with AI assistance discovered 44% more materials, filed 39% more patents, and saw a 177% jump in downstream product innovation. Interestingly, the benefits are more unequally distributed than we might have assumed. Top researchers nearly doubled their output, while the bottom third saw little improvement. This divide is partly because AI automates 57% of idea-generation tasks, allowing top scientists to focus on testing rather than preliminary research. Another downside is that 82% of scientists reported feeling less fulfilled, citing reduced creativity and underutilized skills. References/ Paper: https://lnkd.in/g3sZdAbJ __________________ I share my learning journey here. Join me and let's grow together. For more on AI and learning materials, please check my previous posts. Alex Wang
-
We’ve seen exciting progress in reasoning models this week. We are thrilled to share Graph-PReFLexOR, a model designed for graph-native reasoning in science and engineering: 1️⃣ Graph Representation: Maps the task into a structured graph. 2️⃣ Abstraction: Derives higher-level insights and symbolic representations, enabling the model to generalize better by abstracting problems into shared representations, finding mappings between seemingly different concepts. 3️⃣ Analysis: Investigates design principles, mechanisms, and hypotheses. 4️⃣ Reflection: Delivers thoughtful solutions through deep reasoning. What sets this model apart is its ability to autonomously grow knowledge graphs. This unique capability, termed “Growing a Knowledge Garden🌱”, fosters the creation of synthetic ecosystems of ideas, concepts, abstractions, and relationships. It opens doors to endless possibilities—from answering scientific questions to generating creative, multidisciplinary hypotheses and anticipated behaviors. While Graph-PReFLexOR focuses on scientific reasoning and design, its approach is adaptable to many other domains. We hope this idea inspires further progress in the field!
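As a loose illustration of what "growing a knowledge graph" can mean (a toy sketch under my own assumptions, not Graph-PReFLexOR's actual implementation), each reasoning task emits concept-relation-concept triples, and shared nodes knit different domains together over time. The example triples below are invented for illustration:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal triple store: nodes are concepts, edges carry relations."""

    def __init__(self):
        # node -> set of (relation, neighbor) pairs
        self.edges = defaultdict(set)

    def add_triples(self, triples):
        for subj, rel, obj in triples:
            self.edges[subj].add((rel, obj))
            self.edges[obj]  # touch the object so it exists as a node

    def neighbors(self, node):
        return sorted(obj for _, obj in self.edges[node])

kg = KnowledgeGraph()
# Task 1: reasoning about a biological material (illustrative triples)
kg.add_triples([
    ("spider silk", "exhibits", "hierarchical structure"),
    ("hierarchical structure", "confers", "toughness"),
])
# Task 2: a seemingly different domain reuses the same abstraction,
# so the shared "hierarchical structure" node links the two domains.
kg.add_triples([
    ("composite laminate", "exhibits", "hierarchical structure"),
])

print(kg.neighbors("spider silk"))
# prints ['hierarchical structure']
```

The point of the sketch is the merge step: because abstractions like "hierarchical structure" are shared nodes rather than per-task copies, every new task can connect its concepts to everything learned before, which is what enables the cross-domain analogies the post describes.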