Research Methods

Explore top LinkedIn content from expert professionals.

  • Allie K. Miller (Influencer)

    #1 Most Followed Voice in AI Business (2M) | Former Amazon, IBM | Fortune 500 AI and Startup Advisor, Public Speaker | @alliekmiller on Instagram, X, TikTok | AI-First Course with 200K+ students - Link in Bio

    1,605,580 followers

    Had to share the one prompt that has transformed how I approach AI research. 📌 Save this post.

    Don’t just ask for point-in-time data like a junior PM. Instead, build in more temporal context through systematic data collection over time. Use this prompt to become a superforecaster with the help of AI. Great for product ideation, competitive research, finance, investing, etc.

    ⏰⏰⏰⏰⏰⏰⏰⏰⏰⏰⏰⏰
    TIME MACHINE PROMPT:
    Execute longitudinal analysis on [TOPIC]. First, establish baseline parameters: define the standard refresh interval for this domain based on market dynamics (enterprise adoption cycles, regulatory changes, technology maturity curves). For example, the AI refresh cycle may be two weeks, clothing may be three months, and construction may be two years. Collect n=3 data points spanning two full cycles. For each time period, collect: (1) quantitative metrics (adoption rates, market share, pricing models), (2) qualitative factors (user sentiment, competitive positioning, external catalysts), and (3) ecosystem dependencies (infrastructure requirements, complementary products, capital climate, regulatory environment). Structure the output as: Current State Analysis → T-1 Comparative Analysis → T-2 Historical Baseline → Delta Analysis with statistical significance → Trajectory Modeling with confidence intervals for each prediction. Include data sources.
    ⏰⏰⏰⏰⏰⏰⏰⏰⏰⏰⏰⏰
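    To make the template easier to reuse, here is a minimal sketch of how the prompt could be parameterized in code. The helper name, the refresh-interval table, and the shortened prompt text are illustrative assumptions, not part of the original prompt.

    ```python
    # Minimal sketch: parameterizing the Time Machine prompt for reuse.
    # The helper name, refresh-interval table, and n_points default are
    # illustrative assumptions, not part of the original post.

    REFRESH_INTERVALS = {
        "ai": "two weeks",          # fast-moving domain
        "clothing": "three months",
        "construction": "two years",
    }

    def build_time_machine_prompt(topic: str, refresh_interval: str, n_points: int = 3) -> str:
        """Fill the longitudinal-analysis template for a given topic."""
        return (
            f"Execute longitudinal analysis on {topic}. "
            f"Use a refresh interval of {refresh_interval} for this domain. "
            f"Collect n={n_points} data points spanning two full cycles. "
            "For each time period, collect: (1) quantitative metrics, "
            "(2) qualitative factors, (3) ecosystem dependencies. "
            "Structure output as: Current State Analysis -> T-1 Comparative Analysis -> "
            "T-2 Historical Baseline -> Delta Analysis with statistical significance -> "
            "Trajectory Modeling with confidence intervals. Include data sources."
        )

    if __name__ == "__main__":
        print(build_time_machine_prompt("enterprise AI coding assistants", REFRESH_INTERVALS["ai"]))
    ```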

  • Shakra Shamim

    Business Analyst at Amazon | SQL | Power BI | Python | Excel | Tableau | AWS | Driving Data-Driven Decisions Across Sales, Product & Workflow Operations | Open to Relocation & On-site Work

    188,506 followers

    In my 2 years of navigating #Data_Analyst #interviews with leading product-based companies, I've decoded a consistent strategy for cracking business case study problems. Sharing here for those prepping for the big leagues:

    1. Summarize the Question - First things first, echo the challenge. This not only showcases your grasp of the issue but sets the stage for your analytical journey.
    2. Verify the Objective - Zoom in on what success truly means for the scenario at hand. It's like picking the right target before drawing the bow.
    3. Ask Clarifying Questions - Dig deeper. Every detail can be a golden nugget in understanding the multifaceted context of the business puzzle.
    4. Label the Case and Lay Out Your Structure - Identify the case type and map out your attack plan. A structured framework is your best ally in a sea of data.
    5. State Your Hypothesis - Lead with an educated guess. It not only directs your subsequent analysis but also signals your critical thinking prowess.

    Whether you're a fresh grad or a seasoned professional, mastering these steps is key to presenting your analytical acumen in interviews. Remember, every case is unique, and flexibility is your friend. These steps are just the beginning of the conversation. Would love to hear how others approach case studies in interviews! Share your thoughts below. 👇 Follow Shakra Shamim for more such posts. #BusinessCaseStudy #InterviewPreparation #DataAnalyst

  • Bahareh Jozranjbar, PhD

    UX Researcher @ Perceptual User Experience Lab | Human-AI Interaction Researcher @ University of Arkansas at Little Rock

    8,154 followers

    Qualitative research in UX is not just about reading quotes. It is a structured process that reveals how people think, feel, and act in context. Yet many teams rely on surface-level summaries or default to a single method, missing the analytical depth qualitative approaches offer.

    Thematic analysis identifies recurring patterns and organizes them into themes. It is widely used and works well across interviews, but vague or redundant themes can weaken insights.
    Grounded theory builds explanations directly from data through iterative coding. It is ideal for understanding processes like trust formation but requires careful comparisons to avoid premature theories.
    Content analysis quantifies elements in the data. It offers structure and cross-user comparison, though it can miss underlying meaning.
    Discourse analysis looks at how language expresses power, identity, and norms. It works well for analyzing conflict or organizational speech but must be contextualized to avoid overreach.
    Narrative analysis examines how stories are told, capturing emotional tone and sequence. It highlights how people see themselves but should not be reduced to fragments.
    Interpretative phenomenological analysis focuses on how individuals make meaning. It reveals deep beliefs or emotions but demands layered, reflective reading.
    Bayesian qualitative reasoning applies logic to assess how well each explanation fits the data. It works well with small or complex samples and encourages updating interpretations based on new evidence.
    Ethnography studies users in real environments. It uncovers behaviors missed in interviews but requires deep field engagement.
    Framework analysis organizes themes across cases using a matrix. It supports comparison but can limit unexpected findings if used too rigidly.
    Computational qualitative analysis uses AI tools to code and group data at scale. It is helpful for large datasets but requires review to preserve nuance.
    Epistemic network analysis maps how ideas connect across time. It captures conceptual flow but still requires interpretation.
    Reflexive thematic analysis builds on thematic coding with self-awareness of the researcher's lens. It accepts subjectivity and tracks how insights evolve.
    Mixed methods meta-synthesis combines qualitative and quantitative findings to build a broader picture. It must balance both approaches carefully to retain depth.
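    As a concrete illustration of the computational qualitative analysis entry above, here is a minimal sketch that clusters interview snippets with TF-IDF and k-means via scikit-learn, then leaves the grouped snippets for a researcher to review. The sample snippets and the choice of two clusters are illustrative assumptions, not a procedure from the post.

    ```python
    # Minimal sketch of computational qualitative analysis: grouping interview
    # snippets at scale, then leaving cluster labels for a human to review.
    # The example snippets and k=2 cluster count are illustrative assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    snippets = [
        "I never know if my payment actually went through.",
        "The confirmation screen disappears before I can read it.",
        "Support chat answered quickly and solved my login issue.",
        "The help center article fixed my password problem in minutes.",
    ]

    vectors = TfidfVectorizer(stop_words="english").fit_transform(snippets)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    # Human review step: print snippets grouped by machine-assigned cluster
    # so a researcher can name each theme and catch lost nuance.
    for cluster in sorted(set(labels)):
        print(f"Cluster {cluster}:")
        for text, label in zip(snippets, labels):
            if label == cluster:
                print("  -", text)
    ```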

  • Marc Harris

    Large-scale systemic change, Insight, Learning & everything in between

    19,233 followers

    Evaluation is a means to an end, not an end in itself.

    Almost two decades on from the European Commission's 'Methodology Guidance' of 2006, the Commission has released a substantive new handbook for evaluation. The handbook is designed to support people to prepare, launch and manage the evaluation process.

    "The two main purposes of evaluation – learning and accountability – lead to better and more timely decision-making, and enhance institutional memory on what works and what does not in different situations"

    The handbook is structured into four main chapters, each addressing different aspects of the evaluation process:
    1️⃣ Introduces the role of evaluation, explaining key concepts, types, timing, and stakeholders involved in evaluations.
    2️⃣ Provides practical guidance on managing evaluations through six phases: preparatory, inception, interim, synthesis, dissemination, and follow-up.
    3️⃣ Delves into various evaluation approaches, methods, and tools, offering detailed explanations and examples.
    4️⃣ Focuses on ethics in evaluation, emphasising the importance of conducting evaluations ethically and responsibly.

    Evaluation is more than measuring results. It’s about understanding why and how change happens—and ensuring that lessons learned lead to meaningful action. But too often, evaluations fall into common pitfalls that limit their effectiveness:
    🚨 The tick-box compliance trap – Conducting evaluations just because they are required, rather than seeing them as opportunities for learning and adaptation.
    🚨 Over-simplification in logical frameworks – Change is rarely linear, yet many evaluations rely on rigid frameworks that don’t account for real-world complexity.
    🚨 Feasibility blind spots – Asking questions that can’t be answered within reasonable time or resource constraints.
    🚨 Ignoring the ‘bigger picture’ – Evaluations that focus solely on pre-defined indicators, missing the broader systemic or external influences on success or failure.
    🚨 Lack of openness to criticism – If stakeholders aren’t willing to accept unfavourable findings, evaluations lose their power to drive real improvement.
    🚨 Exclusion of marginalised voices – In rapidly changing or fragile contexts, failing to capture diverse perspectives can reinforce inequalities rather than address them.
    🚨 Ethical risks – Poorly designed evaluations can put participants at risk or fail to protect their dignity and data.

    Done well, evaluation is a tool for better decision-making, sharper strategy, and more meaningful impact. The Evaluation Handbook reminds us that it's not just about proving success or failure—but about continuously learning and adapting for the future.

  • Kevin Hartman

    Associate Teaching Professor at the University of Notre Dame, Former Chief Analytics Strategist at Google, Author "Digital Marketing Analytics: In Theory And In Practice"

    23,981 followers

    Remember that bad survey you wrote? The one that resulted in responses filled with blatant bias and caused you to doubt whether your respondents even understood the questions? Creating a survey may seem like a simple task, but even minor errors can result in biased results and unreliable data. If this has happened to you before, it's likely due to one or more of these common mistakes in your survey design:

    1. Ambiguous Questions: Vague wording like “often” or “regularly” leads to varied interpretations among respondents. Be specific—use clear options like “daily,” “weekly,” or “monthly” to ensure consistent and accurate responses.
    2. Double-Barreled Questions: Combining two questions into one, such as “Do you find our website attractive and easy to navigate?” can confuse respondents and lead to unclear answers. Break these into separate questions to get precise, actionable feedback.
    3. Leading/Loaded Questions: Questions that push respondents toward a specific answer, like “Do you agree that responsible citizens should support local businesses?” can introduce bias. Keep your questions neutral to gather unbiased, genuine opinions.
    4. Assumptions: Assuming respondents have certain knowledge or opinions can skew results. For example, “Are you in favor of a balanced budget?” assumes understanding of its implications. Provide necessary context to ensure respondents fully grasp the question.
    5. Burdensome Questions: Asking complex or detail-heavy questions, such as “How many times have you dined out in the last six months?” can overwhelm respondents and lead to inaccurate answers. Simplify these questions or offer multiple-choice options to make them easier to answer.
    6. Handling Sensitive Topics: Sensitive questions, like those about personal habits or finances, need to be phrased carefully to avoid discomfort. Use neutral language, provide options to skip or anonymize answers, or employ tactics like Randomized Response Survey (RRS) to encourage honest, accurate responses.

    By being aware of and avoiding these potential mistakes, you can create surveys that produce precise, dependable, and useful information.

    Art+Science Analytics Institute | University of Notre Dame | University of Notre Dame - Mendoza College of Business | University of Illinois Urbana-Champaign | University of Chicago | D'Amore-McKim School of Business at Northeastern University | ELVTR | Grow with Google - Data Analytics #Analytics #DataStorytelling
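    As an illustration of the randomized response idea mentioned in point 6, here is a minimal sketch of Warner's classic estimator: each respondent privately flips a biased coin to decide whether to answer the sensitive statement or its negation, so no individual answer is revealing, yet the population prevalence can still be recovered. The coin probability, sample size, and simulated prevalence are illustrative assumptions, not details from the post.

    ```python
    # Minimal sketch of Warner's randomized response estimator.
    # Each respondent privately flips a biased coin: with probability p they
    # answer the sensitive statement ("I have trait A"), otherwise its negation.
    # The observed "yes" rate hides any individual's true answer, but the
    # population prevalence can be recovered from it.
    # p, the sample size, and the true prevalence are illustrative assumptions.
    import random

    def simulate_responses(true_prevalence: float, p: float, n: int, seed: int = 0) -> list[bool]:
        rng = random.Random(seed)
        responses = []
        for _ in range(n):
            has_trait = rng.random() < true_prevalence
            asked_direct = rng.random() < p   # coin flip chooses which statement is answered
            responses.append(has_trait if asked_direct else not has_trait)
        return responses

    def estimate_prevalence(responses: list[bool], p: float) -> float:
        lam = sum(responses) / len(responses)   # observed "yes" rate
        return (lam + p - 1) / (2 * p - 1)      # Warner (1965) estimator, requires p != 0.5

    responses = simulate_responses(true_prevalence=0.20, p=0.7, n=5000)
    print(round(estimate_prevalence(responses, p=0.7), 3))  # close to 0.20
    ```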

  • Bill Staikos (Influencer)

    Advisor | Consultant | Speaker | Be Customer Led helps companies stop guessing what customers want, start building around what customers actually do, and deliver real business outcomes.

    24,186 followers

    I think about Jeff Bezos's "start with the press release and work backward" approach. Here is a future headline I would like to see: "Surveys are no longer the primary tool for gathering insights."

    To get there, surveys will have had to evolve into precision instruments used strategically to fill gaps in data. Let's call this the "Adaptive Survey."

    With adaptive surveys, organizations can target key moments in the customer or employee journey where existing data falls short. Instead of overwhelming consumers and employees with endless, and meaningless, questions, surveys step in only when context is missing or deeper understanding is required. Imagine leveraging your operational data to identify a drop in engagement and deploying an adaptive survey to better understand and pinpoint the "why" behind it. Or, using transactional data to detect unusual purchasing behavior and triggering a quick, personalized survey to uncover motivations.

    Here's how I hope adaptive surveys will reshape insight/VoC strategies:
    Targeted Deployment: Adaptive surveys appear at critical decision points or after unique behaviors, ensuring relevance and avoiding redundancy.
    Data-First Insights: Existing operational, transactional, and behavioral data provide the foundation for understanding experiences. Surveys now act as supplements, not the main course of the meal.
    Contextual Relevance: Real-time customization ensures questions are tailored to the gaps identified by existing data, enhancing both response quality and user experience.
    Strategic Focus: Surveys are used to validate hypotheses, explore unexpected behaviors, or uncover latent needs...not to rehash what’s already known.

    Surveys don't have to be the blunt instrument they are today. They can be a surgical tool for extracting insights that existing data can’t reach. What are your thoughts? #surveys #customerexperience #ai #adaptiveAI #customerfeedback #innovation #technology
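    As a rough sketch of the trigger idea described above (not an implementation from the post), the snippet below flags a sharp drop in an operational engagement metric and queues a short follow-up survey. The z-score rule, the threshold, and the queue_survey stub are illustrative assumptions.

    ```python
    # Minimal sketch of an adaptive-survey trigger: watch an operational metric,
    # and only when it drops sharply, queue a short "why" survey for that user.
    # The z-score rule, the 2.0 threshold, and queue_survey() are illustrative assumptions.
    from statistics import mean, stdev

    def engagement_drop(history: list[float], latest: float, z_threshold: float = 2.0) -> bool:
        """Flag `latest` if it sits more than z_threshold standard deviations below history."""
        if len(history) < 4 or stdev(history) == 0:
            return False
        z = (latest - mean(history)) / stdev(history)
        return z < -z_threshold

    def queue_survey(user_id: str, reason: str) -> None:
        # Stand-in for a real dispatch channel (email, in-app prompt, etc.).
        print(f"survey queued for {user_id}: {reason}")

    weekly_sessions = [12, 11, 13, 12, 14, 13]   # illustrative engagement history
    this_week = 4

    if engagement_drop(weekly_sessions, this_week):
        queue_survey("user-123", "engagement dropped sharply this week - ask why")
    ```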

  • Vitaly Friedman (Influencer)
    217,004 followers

    🧪 How To Resolve Conflicting Data and UX Research. With practical techniques on how to triangulate and reconcile data with mixed-method UX research ↓

    🤔 Data always tells a story — but it’s never just a single story.
    ✅ Quantitative data ← What/When: behavior patterns at scale.
    ✅ Qualitative data ← Why/How: user needs and motivations.
    ↳ Quant usually comes from analytics, surveys, experiments.
    ↳ Qual comes from tests, observations, open-ended surveys.

    🚫 When data disagrees, it doesn’t mean that either is wrong.
    ✅ Different perspectives reveal different parts of a bigger story.
    ✅ Usually it means that there is a missing piece of the puzzle.
    ✅ Reconcile data: track what’s missing, omitted or overlooked.
    ✅ Triangulate: cross-validate data with mixed-method research.

    🚫 Teams often overestimate the weight of big numbers (quant).
    🚫 Designers often overestimate what people say and do (qual).
    ✅ Establish quality thresholds for UX research (sample size).
    ✅ Find new sources: marketing, support, customer success.
    ✅ Find pairings of qual/quant streams, then map them together.

    People tend to believe what they want to believe. This goes for personal decisions, but also for any conducted research. If it shows the value of a decision already made, there will be people embracing it at full swing and carrying it forward fiercely, despite obvious blunders and weak spots. And sometimes, once a decision has been made, people find a way to frame insights from the past into their new narrative, inflating the value of their initiatives.

    The best thing you can do is to establish well-defined thresholds for research — from confidence intervals (95%) and margin of error (<5%) to selecting user profiles and types of research. Risk-averse teams tend to overestimate the weight of big numbers in quantitative research. Users tend to exaggerate the frequency and severity of issues that are critical for them. So as Archana Shah noted, designers get carried away by users’ confident responses and potentially exaggerate issues, sometimes even the wrong ones.

    Raise a red flag once you notice decisions made on poor research, or hasty conclusions drawn from good research. We need both qual and quant — but we need both to be reliable. And: it’s not that one is always more reliable than the other — they just tell different parts of a whole story that isn’t complete yet.

    ✤ Useful resources
    Mixed-Method UX Research, by Raschin Fatemi https://lnkd.in/eb3xsQ-B
    What To Do When Data Disagrees, by Archana Shah https://lnkd.in/ejt2E-Cc
    [More useful resources in the comments ↓]
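    As one concrete way to pin down the thresholds mentioned above, here is a minimal sketch of the standard sample-size formula for estimating a proportion, n = z^2 * p(1-p) / e^2. The 95% confidence z-value, 5% margin of error, and worst-case p = 0.5 are illustrative defaults, not thresholds prescribed in the post.

    ```python
    # Minimal sketch: required sample size for estimating a proportion with a
    # given confidence level and margin of error, n = z^2 * p(1-p) / e^2.
    # The 95% confidence z-value, 5% margin, and worst-case p=0.5 are
    # illustrative defaults, not thresholds prescribed in the post.
    import math

    def required_sample_size(margin_of_error: float = 0.05,
                             z: float = 1.96,          # ~95% confidence
                             p: float = 0.5) -> int:   # worst-case variance
        return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

    print(required_sample_size())        # 385 responses for +/-5% at 95% confidence
    print(required_sample_size(0.03))    # 1068 responses for +/-3%
    ```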

  • Peter Slattery, PhD (Influencer)

    MIT AI Risk Initiative | MIT FutureTech

    64,576 followers

    "We present a novel agent architecture that simulates the attitudes and behaviors of 1,052 real individuals--applying large language models to qualitative interviews about their lives, then measuring how well these agents replicate the attitudes and behaviors of the individuals that they represent. The generative agents replicate participants' responses on the General Social Survey 85% as accurately as participants replicate their own answers two weeks later, and perform comparably in predicting personality traits and outcomes in experimental replications. Our architecture reduces accuracy biases across racial and ideological groups compared to agents given demographic descriptions. This work provides a foundation for new tools that can help investigate individual and collective behavior." Joon Sung ParkCarolyn Zou, Aaron Shaw, Benjamin Mako Hill, Carrie Cai, Meredith MorrisRobb WillerPercy Liang, Michael S. Bernstein

  • Pam Cusick

    SVP at Rare Patient Voice | Strategy, Client Solutions, Patient Advocacy

    18,488 followers

    Lately, the conversations I have had with researchers in both market research and clinical trial recruitment have focused on how we can encourage more patients to participate. Providing a good experience makes a big difference, but is that enough?

    While chatting with Jerry Nicholson, founder of Accessible Data, I learned about a unique approach to making surveys accessible. Online surveys are everywhere. The main problem with conventional surveys is that they present questions in a static way, which often fails to meet the requirements of the person completing the survey. Accessible Surveys changes this approach.

    With Accessible Surveys, participants use an online form to choose how questions are presented. As well as being able to navigate a form using their preferred screen reader, respondents can have questions signed or presented in an Easy Read format. They can listen to questions being read aloud and answer open-ended questions by leaving a voice message.

    Accessible Surveys was developed in collaboration with the International Disability Alliance and its members and is being used by organizations such as Unicef and the Ministry of Education, Ireland to collect richer, more representative data. With an estimated 1.3 billion people – or 16% of the global population – experiencing a significant disability, this is a great start to including more patients in research!

    What tools do you use to ensure accessibility? Drop them in the comments!
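    As a rough sketch of the respondent-chosen presentation idea described above, the snippet below renders the same survey question according to an accessibility preference. The Question fields, the preference names, and the render rules are illustrative assumptions, not the actual Accessible Surveys implementation.

    ```python
    # Minimal sketch of respondent-chosen question presentation: one survey
    # item rendered according to an accessibility preference.
    # The Question fields, preference names, and render rules are illustrative
    # assumptions, not the Accessible Surveys implementation.
    from dataclasses import dataclass

    @dataclass
    class Question:
        text: str               # plain wording, works with screen readers
        easy_read: str          # simplified Easy Read wording
        audio_url: str          # pre-recorded read-aloud version
        sign_video_url: str     # signed version

    def render(question: Question, preference: str) -> str:
        if preference == "easy_read":
            return question.easy_read
        if preference == "audio":
            return f"[play audio] {question.audio_url}"
        if preference == "sign_language":
            return f"[play video] {question.sign_video_url}"
        return question.text    # default: plain text

    q = Question(
        text="How satisfied are you with the support you received?",
        easy_read="Did you get good help? (very good / okay / not good)",
        audio_url="https://example.org/q1.mp3",
        sign_video_url="https://example.org/q1-sign.mp4",
    )
    print(render(q, "easy_read"))
    ```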

  • Antonio Vieira Santos (Influencer)

    Sociologist & Innovation Broker | Accessibility & Digital Inclusion Leader | CxO Advisor | Co-founder AXSChat & Digital Transformation Lab | Future of Work & Sustainability | 🏆 European Digital Mindset Award Winner

    18,035 followers

    Revolutionizing Behavioral Simulations with Generative Agents.

    What if we could accurately simulate human attitudes and behaviors to design better policies, test societal interventions, or solve complex social challenges—without relying on stereotypes or outdated assumptions?

    A recent study, "Generative Agent Simulations of 1,000 People," takes an exciting leap in this direction. It introduces interview-based generative agents powered by advanced large language models (LLMs). These agents are trained on detailed, narrative-rich qualitative interviews, enabling them to reflect the complexity of human behavior in ways traditional models simply cannot.

    🔑 Highlights from the Research
    - Unmatched Accuracy: These agents achieved 85% normalized accuracy on social survey predictions, approaching human-level consistency.
    - Reduced Bias: By grounding agents in personal stories, the study significantly reduces racial, gender, and political bias compared to demographic-based methods.
    - Game-Changing Applications: These agents could transform fields like public policy, behavioral research, and AI by enabling simulations that are both realistic and ethical.

    🤔 But There’s More to Consider...
    While this innovation is groundbreaking, it’s also a solution looking for a problem. The study demonstrates the potential of generative agents but raises critical questions:
    - How do we ensure these models stay relevant in a world where societal attitudes and behaviors constantly shift?
    - Can this resource-intensive methodology scale to broader populations or global contexts?
    - And most importantly, how can we translate this impressive research into real-world applications that drive meaningful change?

    💡 The Road Ahead
    To unlock the full potential of this research, we need to:
    1️⃣ Identify specific, high-impact challenges (e.g., public health campaigns or policy design) where these agents can shine.
    2️⃣ Develop real-world case studies to showcase tangible benefits.
    3️⃣ Collaborate across disciplines—policymakers, social scientists, and AI experts—to align this technology with pressing societal needs.

    What do you think about the potential of generative agents to shape the future of behavioral modeling and decision-making? How can we ensure this technology is ethical, scalable, and impactful? Let’s discuss and explore how innovations like this are changing the game in AI and social sciences.

    #AI #GenerativeAgents #Innovation #BehavioralScience #SocialImpact #MachineLearning #FutureOfWork #Sociology #AIWithPurpose
