Science Data Visualization Methods

Explore top LinkedIn content from expert professionals.

  • View profile for Aiswarya Konavoor

    Founder Togo AI Labs | Passionate teacher | Researcher | Help people transition to AI and conduct AI research | GNN expert

    15,138 followers

    “Modelling Molecules as Graphs”

    I always thought maths and physics were easy to understand because I could visualise the concepts. I didn’t enjoy chemistry, especially organic chemistry: I found it hard to visualise the concepts and fit them into a mathematical and logical framework. In 2021, Science magazine featured “AI-powered protein structure prediction” as the scientific breakthrough of the year. I was researching Graph Neural Networks (GNNs) at that time as part of my master’s thesis, and I learned that GNNs can be used to classify proteins based on their structure. That one concept laid the foundation for my thesis. I really enjoyed my research and, for the first time in my life, liked learning a bit of chemistry. I published and presented my work at conferences, and won the best research paper award.

    Understanding the structure of molecules is important in fields like chemistry and molecular biology. Traditionally, we visualise molecules in 3D, where atoms interact and form stable covalent bonds at specific distances. But there is a simpler way to look at this from a computational standpoint: we can model molecules as graphs. Imagine atoms as nodes and bonds as edges, turning complex 3D structures into easy-to-understand 2D graphs. This graph-based approach simplifies visualization, and we can use machine learning for deeper insights. Using GNNs, we can analyse and predict molecular properties and interactions more efficiently, driving advances in areas like computational chemistry and drug discovery. Here is a beautiful visualization showing how we can model molecules as graphs.

    My work led to collaborations with people from different parts of the world. Here you can watch some of my lectures on GNNs, which I created when Sreedath, Raj, and Rajat approached me to make a comprehensive GNN lecture series for Vizuara's YouTube channel:

    1) Course structure introduction: https://lnkd.in/gBg_HxxB
    2) Introduction to GNN, adjacency matrix and message aggregation: https://lnkd.in/gi5duhEs
    3) Building a GCN from scratch: https://lnkd.in/g8u5TqY5
    4) Temporal Graph Networks (TGN): theory and implementation: https://lnkd.in/gGaz9m-3
    5) Graph Convolutional Neural Networks explained with a numerical example: https://lnkd.in/gYgJiaGm
    6) Demystifying Graph Convolutional Network: https://lnkd.in/g4FbaH-w

    I am actively working on various ML and GNN research projects at Togo AI Labs. If you are interested in collaboration, feel free to reach out to me.
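
    To make the post's idea concrete, here is a minimal sketch (an illustration, not from the original post) of encoding a molecule as a graph and running one GCN-style message-aggregation step over its adjacency matrix. The molecule (water), the one-hot atom features, and the random weights are all illustrative assumptions:

    ```python
    import numpy as np

    # Minimal sketch: encode a water molecule (H2O) as a graph and run one
    # GCN-style message-aggregation step. Atoms are nodes, bonds are edges.
    # The molecule, features, and weights are illustrative assumptions.
    atoms = ["O", "H", "H"]  # node 0 = oxygen, nodes 1-2 = hydrogen

    # Adjacency matrix: oxygen is bonded to both hydrogens; the hydrogens
    # are not bonded to each other.
    A = np.array([[0, 1, 1],
                  [1, 0, 0],
                  [1, 0, 0]], dtype=float)

    # One-hot node features: [is_oxygen, is_hydrogen]
    X = np.array([[1, 0],
                  [0, 1],
                  [0, 1]], dtype=float)

    # GCN normalization A_hat = D^{-1/2} (A + I) D^{-1/2}; adding the
    # identity lets each atom keep its own features during aggregation.
    A_tilde = A + np.eye(len(atoms))
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

    # One message-passing layer: aggregate neighbour features, then project.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(2, 4))       # hypothetical learnable weights
    H = np.maximum(A_hat @ X @ W, 0)  # ReLU(A_hat X W)
    print(H.shape)                    # (3, 4): one embedding per atom
    ```

    Stacking a few such layers and pooling the per-atom embeddings yields a molecule-level representation a classifier can consume, which is the core idea behind GNN-based protein and molecular property prediction.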

  • View profile for Roberta Boscolo

    Climate & Energy Leader at WMO | Earthshot Prize Advisor | Board Member | Climate Risks & Energy Transition Expert

    164,812 followers

    🌎 Seeing the Invisible... NASA - National Aeronautics and Space Administration's state-of-the-art visualization, leveraging the Goddard Earth Observing System Composition Forecasting (GEOS-CF) system, brilliantly illustrates the movement and concentration of critical air pollutants:

    🔹 Particulate Matter (PM2.5)
    🔹 Ozone (O₃)
    🔹 Carbon Monoxide (CO)
    🔹 Nitrogen Oxides (NOₓ)

    Monitoring these pollutants is crucial due to their profound impact on human health and the environment. Exposure to these air pollutants is linked to severe respiratory diseases, cardiovascular issues, and increased mortality rates. PM2.5, in particular, can penetrate deep into the lungs and even enter the bloodstream, exacerbating conditions such as asthma and heart disease.

    Globally, this groundbreaking data supports:

    ✅ Improved local air quality predictions
    ✅ Enhanced understanding of health impacts from pollution
    ✅ Increased accuracy and reliability of satellite datasets

    By visualizing air pollution in real time, we can better target interventions, shape informed policies, and safeguard public health, ultimately protecting vulnerable populations and promoting healthier communities.

    #NASA #AirQuality #SatelliteData #Sustainability #PublicHealth #ClimateScience

  • View profile for Matt Forrest

    🌎 Helping geospatial professionals grow using technology · Scaling geospatial at Wherobots

    72,224 followers

    🛰️ Working with raster and satellite data in Python? These essential libraries will help you streamline your workflows, process imagery, and perform advanced analysis:

    GDAL 🌍 – The industry standard for reading and writing a wide range of geospatial data formats, including raster data.
    Rasterio 🗺️ – Designed for reading and writing geospatial raster data, with easy integration into Python workflows.
    rioxarray 📊 – Makes working with labeled arrays of raster data simpler by extending xarray with geospatial support.
    SentinelHub-Py 🛰️ – Provides easy access to Sentinel satellite data, making it simple to download and process remote sensing imagery.
    xarray 📦 – A powerful library for multi-dimensional arrays, ideal for handling satellite and raster datasets with labeled dimensions.
    satpy ☁️ – Focused on meteorological satellite data processing, especially from platforms like GOES, METEOSAT, and JPSS.
    EarthPy 🌿 – Perfect for beginners, this library simplifies common tasks for working with Earth and remote sensing data.
    Apache Sedona 🔥 – A distributed system that provides scalable geospatial analytics, handling large raster and vector data with ease.
    geemap 🗺️ – Integrates with Google Earth Engine, making it easier to visualize and analyze satellite and remote sensing data in Jupyter notebooks.
    leafmap 🌍 – A tool for interactive mapping with minimal coding, perfect for visualizing large-scale geospatial data and time-series datasets.
    elevation 🏔️ – A simple and efficient way to download and work with elevation datasets in Python.
    EOmaps 🗾 – Enables easy access to Earth Observation data and facilitates interactive plotting of satellite imagery and geospatial data.
    eoreader 🛰️ – Simplifies downloading and processing Earth Observation data from a variety of sources like Sentinel, Landsat, and others.
    Forest-at-Risk 🌳 – A tool for analyzing deforestation and forest degradation using satellite imagery and remote sensing data.
    CoastSat 🏖️ – A tool that uses satellite imagery to extract and monitor changes along coastlines, valuable for coastal erosion studies.
    HyperCoast 🌊 – Designed to support coastal remote sensing, this library helps to process and analyze coastal environmental data.
    RasterFrames 🔧 – Brings the power of Apache Spark to raster data processing, allowing scalable geospatial raster analysis.
    sarpy 🌐 – A library specifically designed for Synthetic Aperture Radar (SAR) data processing, widely used in remote sensing.

    Whether you’re handling large-scale satellite data, coastal monitoring, or earth observation, these libraries will help you get the most out of your raster data analysis in Python.

    #gis #moderngis #geospatial #earthobservation #remotesensing #landsat #raster #spatialanalytics
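
    As a hedged illustration of the kind of workflow these tools enable, here is a minimal Rasterio sketch (an example, not from the original post) that opens a GeoTIFF, reads a band as a NumPy array, and summarizes it; the file name is hypothetical:

    ```python
    import numpy as np
    import rasterio

    # Minimal sketch of a typical Rasterio workflow: open a GeoTIFF, read
    # one band as a NumPy array, and summarize it. "scene.tif" is a
    # hypothetical file name.
    with rasterio.open("scene.tif") as src:
        print(src.count, src.width, src.height)  # bands, columns, rows
        print(src.crs, src.transform)            # georeferencing metadata

        band = src.read(1).astype("float32")     # first band as a 2D array

        # Mask the nodata value before computing statistics, if one is set.
        if src.nodata is not None:
            band = np.where(band == src.nodata, np.nan, band)

    print(f"min={np.nanmin(band):.2f}  max={np.nanmax(band):.2f}  "
          f"mean={np.nanmean(band):.2f}")
    ```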

  • View profile for Milan Janosov

    🌏 Founder @Geospatial Data Consulting | 🖥️ Data Scientist | 📖 #1 Best Seller Author on Amazon | 🎯 PhD in Network Science | 🎖️ Forbes 30u30 | 👨🏻‍🏫 LinkedIn Learning instructor

    81,203 followers

    An older animation from an unpublished project of mine just resurfaced: an NDVI time-lapse of Budapest. As for #NDVI: "Normalized difference vegetation index: The normalized difference vegetation index is a simple graphical indicator that can be used to analyze remote sensing measurements, often from a space platform, assessing whether or not the target being observed contains live green vegetation."

    In practice, NDVI is a float value ranging from -1.0 to +1.0, computed from a satellite image's red and near-infrared (NIR) bands as NDVI = (NIR - Red) / (NIR + Red). Because live green vegetation absorbs red light and strongly reflects near-infrared, greener pixels score higher. As I used free #sentinel satellite images, my pixel size here is 10x10 m.

    In the visualization, I used a green-yellow-red color map to illustrate NDVI values from +1.0 down to 0. Negative values typically indicate water bodies; near-zero positive values are a good reference for built-up areas; values around 0.1-0.5 indicate sparse vegetation, and higher values dense green vegetation.

    Such maps, and especially the temporal evolution of the data behind them, can be a great tool for environmental monitoring, from detecting deforestation to assessing the green policies and efforts of city developers and governments.

    #GIS #spatialanalytics #geospatialdata #geospatial #datascience #datavisualization
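
    As a rough sketch of that computation (an illustration, not from the original project), here is the NDVI formula applied with Rasterio and NumPy, assuming two already-downloaded Sentinel-2 band files: B04 (red) and B08 (near-infrared), both at 10 m resolution. The file names are hypothetical:

    ```python
    import numpy as np
    import rasterio

    # Compute NDVI = (NIR - Red) / (NIR + Red) from two single-band rasters.
    # For Sentinel-2, red is band B04 and NIR is band B08 (both 10 m).
    # The file names below are hypothetical.
    with rasterio.open("B04.tif") as red_src, rasterio.open("B08.tif") as nir_src:
        red = red_src.read(1).astype("float32")
        nir = nir_src.read(1).astype("float32")
        profile = red_src.profile  # reuse georeferencing for the output

    # Guard against division by zero (e.g., nodata pixels where both bands are 0).
    denom = nir + red
    ndvi = np.where(denom > 0, (nir - red) / denom, np.nan)

    # Write the NDVI grid as a GeoTIFF so it can be styled with a
    # green-yellow-red color map in any GIS tool.
    profile.update(dtype="float32", count=1, nodata=np.nan)
    with rasterio.open("ndvi.tif", "w", **profile) as dst:
        dst.write(ndvi.astype("float32"), 1)
    ```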

  • View profile for Brent Dykes

    Author of Effective Data Storytelling | Founder + Chief Data Storyteller at AnalyticsHero, LLC | Forbes Contributor

    73,428 followers

    As I deliver #datastorytelling workshops to different organizations, I encounter a common misconception about how you should approach telling stories with data. To use a Lord of the Rings (LOTR) movie analogy, some #data professionals appear more focused on creating behind-the-scenes documentaries than actual narratives. They want to show the steps, methodologies, and approaches they used during their analysis rather than crafting a concise, compelling narrative.

    As a LOTR geek, I have watched many behind-the-scenes featurettes. However, I recognize that most people have only watched the LOTR movies and none of the documentaries. They're interested in compelling narratives, not the nitty-gritty of how the movies were made. When it comes to data stories, audiences are more interested in hearing an insightful narrative about a business problem or opportunity than an explanation of how you performed your analysis to assess it.

    Taking a documentary approach with your data stories introduces the following problems:

    ❌ Added complexity as you go into details that don't matter to your audience (data collection/preparation, methodology, technical aspects, etc.).
    ❌ Loss of attention or interest as the audience waits to hear something meaningful.
    ❌ Less focused or clear communication as insights become buried in minutiae.
    ❌ Less time to discuss conclusions and determine next steps.
    ❌ Reduced actionability as extraneous details sidetrack the narrative and obscure the key takeaways.

    The only people who get value from a behind-the-scenes documentary are fellow data professionals, a much narrower audience than the broader business audience seeking insightful narratives about the business.

    I recommend delivering the narrative first and having your documentary ready in an appendix (if needed). Most of the time, no one will ask how you performed your analysis (unless they have questions about your numbers). With this approach, the audience will be focused on understanding your insight, implementing your recommendations, and taking action. That's a win-win.

    How do you avoid telling documentaries instead of narratives? 🔽 🔽 🔽 🔽 🔽

    Craving more of my data storytelling, analytics, and data culture content? Sign up for my brand new newsletter today: https://lnkd.in/gRNMYJQ7

  • View profile for Leon Palafox

    Global AI & ML Leader | Creating Real-World Value with Large Language Models and Scalable Data Strategy

    29,188 followers

    Visualizing Uncertainty in Machine Learning with Gaussian Process Regression

    I've been reflecting on how Gaussian Process Regression (GPR) visualizations provide one of the most intuitive ways to understand uncertainty in machine learning models. What makes these visualizations so powerful is how they transform abstract statistical concepts into immediate visual insight:

    🔍 Uncertainty as space: The confidence interval (typically shown as a shaded region) visually represents where the model believes the true function might lie. It's uncertainty made tangible.
    📊 Data-driven confidence: Watching how uncertainty narrows precisely at locations where data exists, while remaining wide in unexplored regions, creates an immediate "aha!" moment about how models learn.
    📈 Correlation intuition: Seeing how adding a single point affects predictions in neighboring regions helps build intuition about the fundamental concept of correlation in probabilistic models.
    🧠 Prior knowledge visualization: GPR visualizations elegantly show how prior assumptions about smoothness and variation influence predictions in regions with sparse data.

    I find these visualizations particularly valuable when explaining complex concepts like Bayesian reasoning, active learning, and the exploration-exploitation tradeoff to stakeholders without technical backgrounds.

    What I appreciate most is how a simple curve with a shaded region conveys a sophisticated mathematical concept: our models aren't just making predictions; they're expressing degrees of uncertainty that systematically shrink as we gather more evidence.

    Have you found other visualization approaches that make complex ML concepts more intuitive? I'd love to hear your thoughts!

    #MachineLearning #DataScience #Visualization #UncertaintyQuantification #GaussianProcesses #BayesianML
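
    Here is a minimal sketch of that canonical plot using scikit-learn and Matplotlib (an illustration, not from the original post); the toy function, sample locations, and kernel settings are assumptions:

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Fit a GP to a few noisy samples of a hidden function and shade the
    # 95% confidence band. The function and sample locations are made up.
    rng = np.random.default_rng(42)
    X_train = rng.uniform(0, 10, size=(8, 1))
    y_train = np.sin(X_train).ravel() + rng.normal(0, 0.1, size=8)

    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01)
    gpr.fit(X_train, y_train)

    # Predict the mean and standard deviation over a dense grid.
    X_grid = np.linspace(0, 10, 200).reshape(-1, 1)
    mean, std = gpr.predict(X_grid, return_std=True)

    plt.fill_between(X_grid.ravel(), mean - 1.96 * std, mean + 1.96 * std,
                     alpha=0.3, label="95% confidence band")
    plt.plot(X_grid, mean, label="GP mean")
    plt.scatter(X_train.ravel(), y_train, color="black", zorder=3,
                label="observations")
    plt.legend()
    plt.show()
    ```

    The shaded band pinches near the observed points and widens in the gaps between them, which is exactly the "data-driven confidence" effect described above.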

  • View profile for Kevin Hartman

    Associate Teaching Professor at the University of Notre Dame, Former Chief Analytics Strategist at Google, Author "Digital Marketing Analytics: In Theory And In Practice"

    23,981 followers

    Want to create better dataviz? Before you call your next data visualization complete, make sure it passes these three tests:

    1. The Spartan Test: Strip it down. Ruthlessly assess every element in your chart. If removing something doesn't change the message, it's clutter. Clear visuals build trust: give your audience only what they need.
    2. The Peek Test: Look away for 5 seconds, then glance back at your visual. Where does your eye go first? Chances are, that's where your audience will focus too. Adjust until attention is drawn to the key insight.
    3. The Colleague Test: Think it's perfect? Share it with a colleague who hasn't seen the analysis. Provide minimal context and give them 10-15 seconds to interpret. Ask what they take away; does it match your intent?

    Nail these three, and your data visualization will not just look good: it will communicate clearly and effectively. Three passing grades means it's ready to be presented.

    Art+Science Analytics Institute | University of Notre Dame | University of Notre Dame - Mendoza College of Business | University of Illinois Urbana-Champaign | University of Chicago | D'Amore-McKim School of Business at Northeastern University | ELVTR | Grow with Google - Data Analytics

    #Analytics #DataStorytelling

  • View profile for Lakshmi Supriya

    Data detective || Storytelling on science and innovation || Strategic thinker driving growth by connecting the dots

    1,318 followers

    I had the opportunity to meet and spend some time talking to expert data storyteller Brent Dykes at the recently concluded 2nd IP Analytics Community of Practice Annual Symposium in Rio de Janeiro and took away some great insights.

    As we were chatting before his keynote address on data storytelling, he said that data stories are not stories with data. Wait, what? Seeing the confusion on my face, he went on. A data story begins with the data; the narrative is based on what the data is telling you. A story with data already has a narrative that does not necessarily originate in the data; rather, the data is shown to support the story.

    I looked back to my time working as a scientist, when the insights from my experimental data would form the story and get written up as a journal paper. When I was working as a science journalist, those journal papers, along with interviews with scientists, would form the story, with the data coming in only indirectly. This fine distinction in wording made absolute sense.

    If you are someone who investigates data and is called upon to present your insights, it is very important to make this distinction. Here are some tips that might help:

    ✅ View the data without any preconceived assumptions. Any initial bias will color the story you are trying to find.
    ✅ Statistics are not data stories. They are the "what" you are seeing. Ask yourself why you are seeing these numbers; the real story is in the why.
    ✅ Remove the noise. If there are too many threads to your data story, it confuses the audience, who will stop paying attention. Think of the times you put down a novel halfway through because of too many subplots.
    ✅ Not all data have stories to tell. Great data stories shift perspectives and change how we see the world. If yours doesn't create that "aha!" moment, consider it supporting material instead.

    Do you have more tips for data storytelling? Do share in the comments.

    #DataStory #DataStorytelling

  • View profile for Iain Brown PhD

    AI & Data Science Leader | Adjunct Professor | Author | Fellow

    36,559 followers

    🚀 Transforming Data Into Stories That Inspire Action

    In today's data-driven world, numbers alone don't drive decisions; stories do. In the latest edition of The Data Science Decoder, I explore the art of data storytelling: how to craft narratives that turn raw insights into meaningful actions.

    🔍 Discover how to:
    - Connect with your audience by humanizing data.
    - Use visuals and narratives to make insights unforgettable.
    - Overcome common challenges like data overload and audience skepticism.

    💡 Plus, I share real-world examples, practical tips, and the role of emerging technologies like Generative AI in reshaping storytelling.

    Whether you're presenting to executives, designing dashboards, or shaping business strategy, mastering this skill is no longer optional; it's essential.

    📰 Read the full article here:

    #DataScience #Storytelling #AI

  • View profile for Aditi Singh

    Publishing daily updates on current affairs, communication tips and business case studies | Deloitte USI | IIM Shillong | Certified Lean Six Sigma Green Belt

    3,730 followers

    Data alone can often feel impersonal and hard to relate to, but professionals have found an interesting way around it, at least in the consulting world. I found it interesting that Bain & Company tackles this with "customer journey mapping," an approach that transforms data into vivid narratives about relatable customer personas.

    The process starts by creating detailed personas that represent key customer groups. For example, when working on the UK rail network, Bain created the persona of "Sarah," a suburban working mom whose struggles with delays that made her miss her daughter's events felt all too real. With personas established as protagonists, Bain meticulously maps their end-to-end journeys, breaking them down into a narrative arc that highlights every interaction and pain point. Techniques like visual storyboards and real customer anecdotes elevate this beyond mere experience mapping into visceral storytelling.

    The impact is clear: one study found a 35% boost in stakeholder buy-in when Bain packaged its conclusions as customer journey stories versus dry analysis. By making customers the heroes and positioning themselves as guides resolving their conflicts, Bain taps into the power of storytelling to inspire change.

    Whether mapping personal experiences or bringing data to life, leading firms realize stories engage people and shape beliefs far more than reciting facts and figures. Narratives make even complex ideas resonate at a human level in ways numbers alone cannot.
